Nation Topics - Books and Ideas
The buildings' wounds are what I can't forget;
though nothing could absorb my sense of loss,
I stared into their blackness, what was not
supposed to be there, billowing of soot
and ragged maw of splintered steel, glass.
The buildings' wounds are what I can't forget,
the people dropping past them, fleeting spots
approaching death as if concerned with grace.
I stared into the blackness, what was not
inhuman, since by men's hands they were wrought;
reflected on the TV's screen, my face
upon the building's wounds. I can't forget
this rage, I don't know what to do with it--
it's in my nightmares, towers, plumes of dust,
a staring in the blackness. What was not
conceivable is now our every thought:
We fear the enemy is all of us.
The buildings' wounds are what I can't forget.
I stared into their blackness, what was not.
Late in the evening in back-road America you tend to pick the motels with a few cars parked in front of the rooms. There's nothing less appealing than an empty courtyard, with maybe Jeffrey Dahmer or Norman Bates waiting to greet you in the reception office. The all-night clerk at the Lincoln motel (three cars out front) in Austin, Nevada, who checked me in around 11:30 pm last week told me she was 81, and putting in two part-time jobs, the other at the library, to help her pay her heating bills since she couldn't make it on her Social Security.
She imparted this info without self-pity as she took my $29.50, saying that business in Austin last fall had been brisk and that the fifty-seven motel beds available in the old mining town had been filled by crews laying fiber optic cable along the side of the road, which in the case of Austin meant putting it twenty feet under the graveyard that skirts the road just west of town.
Earlier that day, driving from Utah through the Great Basin along US 50, famed as "the loneliest road," I'd seen these cables, blue and green and maybe two inches in diameter, sticking out of the ground on the outskirts of Ely, as if despairing at the prospect of the Great Salt Lake desert stretching ahead.
So we can run fiber optic cable through the Western deserts but not put enough money in the hands of 81-year-olds so they don't have to pull all-night shifts clerking in motels. What else is new?
People who drive or lecture their way through the American interior usually notice the same thing, which is that you can have rational conversations with people about the Middle East, about George W. Bush and other topics certain to arouse unreasoning passion among sophisticates on either coast. Robert Fisk describes exactly this experience in a recent piece for The Independent, for which he works as a renowned reporter and commentator, mostly on Middle Eastern affairs.
Fisk claims on the basis of a sympathetic hearing for his analysis--unsparing of Sharon's current rampages--on campuses in Iowa and elsewhere in the Midwest that things are now changing in Middle America. After twenty-five years of zigzagging my way across the states I can't say I agree. It's always been like that, and even though polls purport to establish that a high percentage of all Middle Americans claim to have had personal exchanges with Jesus and reckon George W. to be the reincarnation of Abe Lincoln, the reality is otherwise. Twenty years ago Fisk would have met with lucid views in Iowa on the Palestinian question, plus objective assessments of the man billed at that time as Lincoln's reincarnation, Ronald Reagan.
Some attitudes do change. White people are more afraid of cops than they used to be. A good old boy in South Carolina I've bought classic cars from for a quarter of a century was a proud special constable back in the early eighties. These days if a police cruiser passes him on the highway, he'll turn off at the next intersection and take another road. Reason: A few years ago a couple of state cops stopped him late at night, frisked him, accused him of being drunk. This profoundly religious Baptist told them truthfully he'd never consumed alcohol in his life. Then they said he must be a drug dealer. He reckons the only reason they didn't plant some cocaine in his car was that he told them to check him out with the local police chief, an old friend.
I know from the stats that a lot of Americans are poor, so how come I'm often the only fellow on the road, or in town, in an old car aside from some of the Mexican fieldworkers in California for whom such cars are home? Most everyone seems to be in a late-model pickup or at least a nice new Honda Civic. I know, I know. The poor are out there, lots of them, but the whole place just doesn't seem to feel as poor as it often did in the early eighties recession. Then, day after day you could drive through towns that felt like graveyards, with no prospect of fiber optic cable.
Take Grants on I-40 in New Mexico, west of Albuquerque, which became the nation's self-proclaimed "uranium capital" in the fifties after Paddy Martinez heard descriptions of what uranium ore looked like and led the mining prospectors to the yellow rocks he'd been looking at down the years. The mines closed, and I recall from the early 1980s Grants looking sadly becalmed, with its Uranium Café and souvenir motels from the great days of Route 66. The audio in the Mining Museum still speaks plaintively about radiation's bad rep, despite the fact that in modest amounts it's good for you and there was much more of it around when the world was young.
Well, 66 nostalgia is still strong in Grants, but aside from the Lee Ranch coal mine the juice in Grants's economy now comes in large part from three prisons--one fed, one state and one private.
No wonder people are nervous of cops. There are so many prisons for the cops to send you to. So many roads where a sign suddenly comes into view, advertising correctional facility and warning against hitchhikers. I was driving through Lake Valley in eastern Nevada along US 93, with Mount Wheeler looming to the east. Listening to the radio and Powell's grotesque meanderings I was thinking, Why not just relocate the whole West Bank to this bit of Nevada where the Palestinians could have their state at last, financed by a modest tax on the gambling industry? The spaces are so vast you wouldn't even need a fence. Then reality returned in the form of the usual sign heralding a prison round the next bend.
West along US 50 from Austin I came to Grimes Point, site of fine petroglyphs. A sign informed me that "The act of making a petroglyph was a ritual performed by a group leader. Evidence suggests that there existed a powerful taboo against doodling." The graffiti problem. Some things never change. On the other hand, some things do. Many thousands of years ago those rocks were on the edge of a vast sea, maybe 700 feet deep. The petroglyph ridge was once beachfront property. The world was warmer then, and we're heading that way once more, from natural causes. To leave you on an upbeat note: At least natural radiation is on the wane.
More than the much-reviled products of Big Tobacco, big helpings and Big Food constitute the number-one threat to America's children, especially when the fare is helpings of fats, sugars and salt. Yet the nation so concerned about protecting kids from nefarious images on library computers also
allows its schools to bombard them with food and snack ads on Channel One and to sign exclusive deals with purveyors of habit-forming, tooth-rotting, waist-swelling soft drinks.
Foreigners who arrive in the United States often remark on the national obsessions about food and money. It is perhaps not surprising that a gluttonous mammon would rule the federal regulators of our food chain, but Marion Nestle, professor of nutrition at New York University, confesses that she has heard few of her nutritionist colleagues discuss the cardinal point: "Food companies will make and market any product that sells, regardless of its nutritional value or its effect on health."
Nestle goes on to demonstrate that not only do food companies use traditional corporate checkbook clout with Congress to insure their unfettered right to make money; they also co-opt much of the scientific and nutritional establishment to aid in their efforts. For example, the omnipresent "milk mustache" advertisements often show blacks and Asians--precisely those who are most likely to be lactose-intolerant. But then "science" rides to the rescue: There are a lot more research dollars shunted to those arguing that lactose intolerance is not a problem than there are for those who think otherwise. In fact, the Physicians' Committee for Responsible Medicine sued to annul the federal dietary guidelines, which recommended two to three servings of milk products daily; six of the eleven people on the voting committee had received research grants, lectureships or other support from the food industry.
Here Nestle wobbles a little in her argument, however. She waves the standard of science on behalf of the Food and Drug Administration when it comes to food supplements and herbal medicines, but devalues the "science" as well by revealing the conflicts of interest among researchers and regulators. Science is often up for sale. Researchers go to the food corporations for the same reason that bandits rob banks: That's where the money is, not least since the FDA's own research funding is controlled by Congressional committees in charge of agriculture, whose primary aim is hardly to promote dieting--it is the force-feeding of agribusiness with federal funds. Indeed, Nestle concedes, "USDA officials believe that really encouraging people to follow dietary guidelines would be so expensive and disruptive to the agricultural economy as to create impossible political barriers."
The dietary guidelines Nestle is referring to were monumentalized in the famous "food pyramid" familiar to every primary school student. But the pharaohs finished theirs in less time than it took the USDA to pilot its version past the army of lobbyists who resented the hierarchical implication that some foods were healthier than others. As a whistleblower on the advisory committee that was drawing up the guidelines, Nestle is well qualified to recount the obstacles it faced.
In fact, many people did become more health-conscious as a result of such guidelines, but as Nation readers know, practice does not always match theory. Much of Food Politics reveals how the food industry has seized upon the marketing possibilities of consumers' safety concerns and perverted them by adding supplements to junk foods and then making health claims for the products.
Food is an elemental subject, on a par with sex and religion for the strength of people's beliefs about it. Otherwise rational people have no difficulty believing the impossible during breakfast, where their stomachs are concerned. Big Food relies on that snake-oil factor, the scientific illiteracy of most consumers. For example, marketers are happy with the advice to eat less saturated fat, since most buyers won't recognize it when it's drizzled across their salad. But advice to eat less of anything recognizable stirs up serious political opposition.
Federal dietary guidelines recommending that we "eat less" were thinned down to suggesting that we "avoid too much," which metabolized into "choose a diet low in..." And so on. For example, Nestle relates how in 1977 the National Cattlemen's Association jumped on Bob Dole's compromise wording on reducing red meat in the diet and increasing lean meat consumption: "Decrease is a bad word, Senator," the cattlemen warned him. The cowboys effectively corralled the McGovern committee on dietary guidelines: "Decrease consumption of meat" was fattened into "choose meats, poultry and fish which will reduce saturated fat intake."
Sometimes the more potential for harm a product has, the more likely it seems that its positive--or putative--health benefits will be touted. We get vitamin-supplemented Gummi Bears and, what provokes Nestle's justifiable ire most, Froot Loops. This marshmallow-blasted "cereal...contains no fruit and no fiber" and "53% of the calories come from added sugar," she inveighs. The perfect breakfast complement to a twenty-ounce bottle of cola that will be downed in school? Such pseudo-foods occupy the very top of the food pyramid, which characterizes them as to be "used sparingly"--or rather, used only if you have good dental insurance.
As Nestle points out, health warnings on alcohol and tobacco have done little to stop consumers. But picture a tobacco company allowed to sell cigarettes as "healthier" or "with added vitamins." (Indeed, she details a campaign by the alcohol companies to get Congress to allow them to market their products as healthy elixirs until Strom Thurmond's religious principles outweighed his conservatism enough for him to help shoot down the proposal.)
I was mildly surprised that Nestle does not comment on the imprecise use of "serving" information on food packaging. As a longtime student of labels, I find that the unhealthiest foods seem to have incredibly small "servings" compared with what consumers actually eat or drink. For the USDA, one slice of white bread or one ounce of breakfast cereal is a "serving" of grain, and nutritional data such as caloric content are rendered "per serving." A cinema-size actual serving of soda may contain 800 calories in sugar, before you get down to the buttered popcorn, not to mention the Big Mac before or after.
Food marketers are hardly breaking people's arms to persuade them to eat this stuff, of course. It is, after all, a great American principle that you can have your cake, eat it and slim down at the same time. What Nestle calls "techno-foods"--those labeled "healthier," "less fat," "lite," "more fiber"--pander to the health consciousness of a generation that will do anything to lose weight and live longer, except eat less.
The ultimate example of food marketing has to be Olestra, the cooking fat that passes through the gut undigested. Its maker, Procter & Gamble, has spent up to $500 million on it and twenty-seven years of the FDA's time getting various approvals, while it kept trying to remove the mandated health warning that the product could cause cramping, loose stools and block the absorption of fat-soluble vitamins. P&G should count its blessings. A Center for Science in the Public Interest report says: "Olestra sometimes causes underwear staining. That phenomenon may be caused most commonly by greasy, hard-to-wipe-off fecal matter, but occasionally also from anal leakage (leakage of liquid Olestra through the anal sphincter)."
By 1998 Procter & Gamble disingenuously claimed that 250 tons, or four railcarfuls, of fat had not been added to American waistlines. No one claimed it had--what the company meant was that was how much Olestra had been used to fry chips. The public expectations were quite high, though; Nestle says that "people also were disappointed that the chips did not help them lose weight." Indeed, she reports that some ended up with more calories from eating Olestra-fried chips than they would have from other kinds, because they consumed a higher volume, convinced that they were calorie-free, though of course they were not.
But given the amount of money involved and the way food-industry/scientific-community connections are structured, "it is virtually impossible for any nutritionist interested in the benefits and risks of Olestra to avoid some sort of financial relationship with P&G unless one systematically refuses all speaking invitations, travel reimbursements, honoraria and meals from outside parties," Nestle observes.
In yet another case of Big Food getting its way, Nestle chronicles how the State Department came to declare that signing the World Health Organization/UNICEF international code on marketing of baby formula would flout the Constitution. "Inasmuch as this explanation strains credulity," Nestle suggests, the real reason was lobbying by US formula companies. The formula makers are fighting a war of attrition against mother's milk, in other words, not just here but internationally.
A more recent case involves the coalition that forced the FDA to allow claims of benefits from untested herbal supplements. I wish Nestle had gone into more detail about the sociology of this mélange of New Age alternative-medicine users, libertarian types and those who mistrust the medical establishment. Groups like Citizens for Health and the Corporate Alliance for Integrative Medicine rallied behind the rapidly growing corporations to ram a suppository up the FDA and its power to control sales of what on occasion have proven to be fatally flawed "alternative" remedies for everything from impotence to Alzheimer's. As she quotes an FDA commissioner, "[We] are literally back at the turn of the century, when snake oil salesmen made claims for their products that could not be substantiated." She reports claims that 12 percent of users of herbal medicines, or about 12 million people, suffer from some kind of adverse effect.
People may feel better when they take supplements, but, she asks, should health officials use "feelings" as a basis for regulatory measures? Or should the FDA instead "take the lead in reenergizing a crucial phase of its basic mission to promote honest, rational scientific medicine by vigorously combating its opposite"?
Many people may want to know what "science" is. Is it corporate-sponsored research, or the AMA defending its professional turf with the same vigor with which it has traditionally fought "socialized medicine"? Nestle shows how the American Academy of Pediatrics tried to insure that highly profitable baby formula flowed through its hands and rallied against direct sales to mothers. Was that concern for the "client" or concern for professional prerogatives?
Perhaps Nestle should have been more polemical. The food supplement row raised the question of "whether irreparable damage has been done to the ability of our federal regulatory system to ensure the safety of foods and supplements and to balance public health interests against the economic interests of corporations," she writes. But her own reporting suggests that the barbarians are already inside the gates and forcing their wares on the gullible.
Nestle sees no magic bullet to retrieve the situation. She wants "some federal system to guarantee that all those products on the shelves are safe and effective," and she asks, "Shouldn't there be some regulatory framework to control patently absurd or misleading claims?" To answer that in the affirmative is not necessarily the same as agreeing that the FDA is the best agency, certainly in its present form, nor that the AMA and similar organizations are in the corner of good science. The FDA's record does not inspire confidence, which is one of the reasons the herbalists' revolt was so successful in Congress. Its arrogance often matches its ignorance. While reading this book I went to a small British-owned cholesterol shop in Manhattan (pork pies, etc.). Its owner can't import kippers because the FDA does not recognize them as food. His first shipment of a brand of British Band-Aids was held on suspicion of being a soup, and when that confusion was finally cleared up, the FDA demanded of him a medical-goods import license.
I would like to hear more about how the FDA could be made more responsive and more efficient. It seems that in their present form, the regulatory bodies need some means of democratic oversight to check bureaucracy and to weigh problems of undue influence from the producing industries. Nestle details problems we've come to see elsewhere: the revolving door between civil servants, Congressional staff and industry. She also suggests rules--"a higher and stronger 'firewall'" between regulatory agencies and industry to inhibit the easy career glide from poaching to gamekeeping and back again--and she is entirely correct that the last bodies that should be overseeing FDA funding are the Congressional agriculture committees, which are dedicated to the prosperity of agribusiness.
Otherwise, Nestle's wish list ranges from sensible to Mission Impossible: tighter labeling rules so people can see exactly what they are consuming. A ban on advertising of junk foods in schools, especially candies and soft drinks with high sugar content. Sumptuary taxes on soft drinks as well--sure to be opposed bitterly by the lobbyists. If alcohol and tobacco advertisements cannot be allowed on children's TV, why allow advertising of foods that promote obesity and future health ills on a par with them?
At first glance, Nestle's call for an ethical standard for food choices for nutritionists and the industry seems highly idealistic; but ten years ago, who would have foreseen Philip Morris's berating of state governments for not spending their tobacco settlement money on the pledged anti-child-smoking campaigns? Already, more and more scientific journals are demanding disclosure of conflicts of interest for papers submitted.
Nestle does not touch the subject directly, but who knows, maybe campaign finance reform really will cut indirectly the pork in the political diet and the crap in the school lunches. However, it will be a hard push. Educating the public is a start, and Food Politics is an excellent introduction to how decisions are made in Washington--and their effects on consumers. Let's hope people take more notice of it than they do of the dietary guidelines.
The third-year medical student held the intravenous catheter, poised to insert it into a patient's vein. Suddenly the patient asked, "Have you done this before?" As the student later recounted to me, a long period of silence fell upon the room. Finally, the student's supervising resident, who was also present, said, "Don't worry. If she misses, I'll do it." Apparently satisfied, the patient let the student proceed.
Breaking this type of uncomfortable silence is the goal of Complications: A Surgeon's Notes on an Imperfect Science by Atul Gawande, a surgical resident and a columnist on medicine for The New Yorker. As Gawande's collection of stories reveals, fallibility, mystery and uncertainty pervade modern medicine. Such issues, Gawande believes, should be discussed openly rather than behind the closed doors of hospital conference rooms.
Complications is surely well timed. In 2000, the Institute of Medicine published "To Err Is Human," a highly charged report claiming that as many as 98,000 Americans die annually as a result of medical mistakes. In the wake of this study, research into the problem of medical error has exploded and politicians, including then-President Bill Clinton, have proposed possible solutions. The message was clear: The silence that has too long characterized medical mistakes is no longer acceptable. Yet while Gawande's book provides great insights into the problem of medical error, it also demonstrates how there can be no quick fix.
What may be most remarkable about the recent obsession with medical error is just how old the problem is. For decades, sociologists have conducted studies on hospital wards, perceptively noting the pervasiveness of errors and the strategies of the medical profession for dealing with them. As sociologist Charles Bosk has shown, doctors have largely policed themselves, deciding what transgressions are significant and how those responsible should be reprimanded. Within the profession, then, there is much discussion. Yet the public was rarely told about operations that went wrong or medications that were given in error. Residents joining the medical fraternity quickly learned to keep quiet.
Indeed, when one of those young physicians decided to go public, he used a pseudonym, "Doctor X." In Intern, published in 1965, the author presented a diary of his internship year, replete with overworked residents, arrogant senior physicians and not a few medical errors. In one instance, a surgeon mistakenly tied off a woman's artery instead of her vein, leading to gangrene and eventual amputation of her leg. Doctor X pondered informing the woman about the error, wondering "just exactly where medical ethics come into a picture like this." But his colleagues convinced him to remain quiet.
One whistleblower willing to use his own name and that of his hospital, New York's Bellevue, was William Nolen. In The Making of a Surgeon, published in 1970, surgeons swagger around the hospital, making derisive comments about patients and flirting relentlessly with nurses. (Not the least of reasons for being nice to nurses was the expectation that they would help cover up young doctors' mistakes.) Interestingly, Nolen was subsequently excoriated both by surgeons, who believed he had betrayed the profession's secrets, and by the lay public, who felt he was celebrating the "callousness and prejudice" of surgeons toward vulnerable patients.
Perhaps the peak of this genre of scandalous tell-all accounts occurred in 1978, with the publication of The House of God, written by the pseudonymous Samuel Shem. Although fictional, the book draws on the author's raucous and racy experiences as a medical intern at Boston's Beth Israel Hospital. To Shem, medicine's whole approach to patient care was misguided. The book's hero, the Fat Man, teaches his trainees a vital lesson: "The delivery of medical care is to do as much nothing as possible."
Today it has become more fashionable than rebellious for physicians to describe the trials and tribulations of their training. Dozens of doctors (and some nurses) have published such accounts. Gawande is a prime example of this more mainstream type of physician-author. Even though he describes very disturbing events in his articles for The New Yorker (some of which have been reprinted in Complications), he uses his real name and that of his institution: Boston's Brigham and Women's Hospital.
Gawande, however, has taken the art of physician narrative to a new level. He is a deft writer, telling compelling stories that weave together medical events, his personal feelings and answers to questions that readers are surely pondering. Most important, Gawande paints with a decidedly gray brush. There are few heroes or villains in Complications, just folks doing their jobs. Although some readers, perhaps those who have felt victimized by the medical system, may find Gawande's explanations too exculpatory of doctors, he has documented well the uncertainties and ambiguities that characterize medical practice.
Take, for example, his chapter "When Doctors Make Mistakes." With great flair, Gawande describes a case in which he mistakenly did not call for help when treating a woman severely injured in a car accident. Although Gawande could not successfully place a breathing tube in her lungs, he stubbornly kept trying rather than paging an available senior colleague. Eventually, Gawande clumsily attempted an emergency procedure with which he had little experience: cutting a hole in her windpipe and breathing for her through it. It was only through good fortune that the patient did not die or wind up with brain damage. An anesthesiologist, called in very late in the game, managed to sneak a child-size breathing tube into her windpipe, enabling the patient to obtain adequate oxygen.
With typical candor, Gawande lists the possible reasons that he did the wrong thing: "hubris, inattention, wishful thinking, hesitation, or the uncertainty of the moment." All doctors, he is arguing, experience these very human feelings as they tend to their craft. The fact that lives are at stake may make physicians--as compared with other professionals--even more prone to such emotions.
Gawande also details how the surgery department addressed his error. The case was presented at the weekly morbidity and mortality (M & M) conference, where physicians discuss deaths and other bad outcomes. "The successful M & M presentation," Gawande perceptively notes, "inevitably involves a certain elision of detail and a lot of passive verbs." This clearly occurred during the discussion of Gawande's case, where, remarkably, no one ever asked him why he did not call for help sooner. Rather, his blunder was later addressed through another ritual, a private discussion between Gawande and the senior attending he had not called. Games with language and secret conversations: These are the reasons Gawande has written his book.
In another chapter, Gawande provides a more provocative explanation for the type of mistake he made. Gaffes, he argues in "Education of a Knife," are part of how surgeons--and other physicians--must learn their craft. (After all, physicians don't perform medicine, they practice it.) In an anecdote resembling that of my third-year student, Gawande describes how he routinely caused complications when learning to place dangerous central-line catheters into the necks of seriously ill patients. Expertise, he explains, does not just happen. Physicians in training must victimize a certain percentage of patients to acquire the skills they will need to become competent doctors. Should we consider these events to be mistakes or business as usual? Deciding how to define a medical error is not the least problem.
In such learning situations, the necessary experience is best attained by keeping quiet. Using the "physician's dodge," patients are told "You need a central line" but not "I am still learning to do this." One ramification of this type of learning, Gawande notes, is the victimization of poor, less educated patients, who are often incapable of questioning doctors. Medicine's inclination to learn on "the humblest of patients" becomes especially apparent with Gawande's candid admission that he himself chose a more senior physician--rather than a more attentive cardiology fellow--to care for his son's heart problem.
Mistakes may be made not only by physicians but by patients. In the chapter "Whose Body Is It, Anyway?" Gawande asks what physicians should do when patients seem to make bad decisions. One especially compelling story, which I often use to teach medical students, involves a man who absolutely refused to go on a breathing machine after experiencing a complication of gall bladder surgery. Although the doctors explained that artificial ventilation would only be temporary and would likely save his life, the patient continued to object.
When the man passed out due to lack of oxygen, Gawande was faced with a devastating quandary. Does he abide by the man's wishes, which is what doctors are supposed to do, or immediately put him on the ventilator? Gawande chose the latter. I love to ask students what they think the man said when, a few days later, Gawande triumphantly took him off the machine. Invariably, half of the students predict that the man said, "Call my lawyer." But the other half, who guess that he said "Thank you," are correct. Gawande had surely averted a mistake in this case, but he was left without clear guideposts for approaching similar cases in the future.
Complications is filled with other stories demonstrating the capriciousness of medicine. For example, Gawande once detected a case of the rare, often fatal infection necrotizing fasciitis (flesh-eating bacteria) because he happened to have seen a case a few weeks before. He ultimately saved the patient's life, not through hard, scientific evidence but through a gut feeling and a willingness to submit a patient to possibly unnecessary surgery. "Medicine's ground state," he concludes, "is uncertainty." Other chapters examine why the medical profession so often hides the mistakes of impaired physicians, and the questionable use of an operation to help morbidly obese patients lose weight.
In the wake of the Institute of Medicine report, experts have proposed numerous remedies for the problem of error. Most attention has focused on a "systems approach," which would produce a "culture of safety" similar to that of the airline industry. In such a scheme, sophisticated computerized systems would be put in place to detect impending errors, such as wrong medication doses, sloppily written prescriptions and dangerous drug interactions. This emphasis aims to revamp the current approach to medical error, which encourages finger-pointing and malpractice lawsuits.
Gawande's book demonstrates both the advantages and limits of such a systems model. On the one hand, if medical mistakes are no longer stigmatized, physicians may be more willing to reveal their own errors and those of their peers. The notion that the case of the obstructed airway could be discussed in an open and nonjudgmental environment, rather than couched in secrecy, is altogether welcome.
On the other hand, there is a reason decades of exposés like Complications have not led to significant change. Defining errors and ascertaining their causes is a tricky business.
So is dealing with the issue of blame. Gawande is willing to admit that he screwed up when he did not call for immediate help for his deteriorating trauma patient. "Good doctoring is all about making the most of the hand you're dealt," he writes, "and I failed to do so." But many physicians remain reluctant to come quite so clean.
A friend and I were sitting around commiserating about the things that get to us: unloading small indignities, comparing thorns. "So there I was," she said, "sitting on the bus and this man across the aisle starts waving a copy of law professor Randall Kennedy's new book Nigger. He's got this mean-looking face with little raisiny eyes, and a pointy head, and he's taking this book in and out of his backpack. He's not reading it, mind you. He's just flashing it at black people."
"Don't be so touchy," I responded. "Professor Kennedy says that the N-word is just another word for 'pal' these days. So your guy was probably one of those muted souls you hear about on Fox cable, one of the ones who's been totally silenced by too much political correctness. I'd assume he was just trying to sign 'Have a nice day.'"
"Maybe so," she said, digging through her purse and pulling out a copy of Michael Moore's bestselling Stupid White Men. "But if I see him again, I'm armed with a 'nice day' of my own."
"That's not nice," I tell her. "Besides, I've decided to get in on the publishing boom myself. My next book will be called Penis. I had been going to title it Civil Claims That Shaped the Evidentiary History of Primogeniture: Paternity and Inheritance Rights in Anglo-American Jurisprudence, 1883-1956, but somehow Penis seems so much more concise. We lawyers love concision."
She raised one eyebrow. "And the mere fact that hordes of sweaty-palmed adolescents might line up to sneak home a copy, or that Howard Stern would pant over it all the way to the top of the bestseller list, or that college kids would make it the one book they take on spring break--"
"...is the last thing on my mind," I assured her. "Really, I'm just trying to engage in a scholarly debate about some of the more nuanced aspects of statutory interpretation under Rule 861, subsection (c), paragraph 2... And besides, now that South Park has made the word so much a part of popular culture, I fail to see what all the fuss is about. When I hear young people singing lyrics that use the P-word, I just hum along. After all, there are no bad words, just ungood hermeneutics."
"No wonder Oprah canceled her book club," she muttered.
Seriously. We do seem to have entered a weird season in which the exercise of First Amendment rights has become a kind of XXX-treme Sport, with people taking the concept of free speech for an Olympic workout, as though to build up that constitutional muscle. People speak not just freely but wantonly, thoughtlessly, mainlined from their hormones. We live in a minefield of scorched-earth, who-me-a-diplomat?, let's-see-if-this-hurts words. As my young son twirls the radio dial in search of whatever pop music his friends are listening to, it is less the lyrics that alarm me than the disc jockeys, all of whom speak as though they were crashing cars. It makes me very grateful to have been part of the "love generation," because for today's youth, the spoken word seems governed by people from whom sticks and stones had to be wrested when they were children--truly unpleasant people who've spent years perfecting their remaining weapon: the words that can supposedly never hurt you.
The flight from the imagined horrors of political correctness seems to have overtaken common sense. Or is it possible that we have come perilously close to a state where hate speech is the common sense? In a bar in Dorchester, Massachusetts, recently, a black man was surrounded by a group of white patrons and taunted with a series of escalatingly hostile racial epithets. The bartender refused to intervene despite being begged to do something by a white friend of the man. The taunting continued until the black man tried to leave, whereupon the crowd followed him outside and beat him severely. In Los Angeles, the head of the police commission publicly called Congresswoman Maxine Waters a "bitch"--to the glee of Log Cabin Republicans, who published an editorial gloating about how good it felt to hear him say that. And in San Jose, California, a judge allowed a white high school student to escape punishment after the student, angry at an African-American teacher who had suspended his best friend, scrawled "Thanks, Nigga" on a school wall. The judge was swayed by an argument that "nigga" is not the same as "nigger" but rather an inoffensive rap music term of endearment common among soul brothers.
Frankly, if Harvard president Lawrence Summers is going to be calling professors to account for generating controversy not befitting that venerable institution, the disingenuous Professor Kennedy would be my first choice. Kennedy's argument that the word "nigger" has lost its sting because black entertainers like Eddie Murphy have popularized it either dehistoricizes the word to a boneheaded extent or ignores the basic capaciousness of all language. The dictionary is filled with words that have multiple meanings, depending on context. "Obsession" is "the perfume," but it can also be the basis for a harassment suit. Nigger, The Book, is an appeal to pure sensation. It's fine to recognize that ironic reversals of meaning are invaluable survival tools. But what's selling this book is not the hail-fellow-well-met banality of "nigger" but rather the ongoing liveliness of its negativity: It hits in the gut, catches the eye, knots the stomach, jerks the knee, grabs the arm. Kennedy milks this phenomenon only to ask with an entirely straight face: "So what's the big deal?"
The New Yorker recently featured a cartoon by Art Spiegelman that captures my concern: A young skinhead furtively spray-paints a swastika on a wall. In the last panel, someone has put the wall up in a museum and the skinhead is shown sipping champagne with glittery fashionistas and art critics. I do not doubt that hateful or shocking speech can be "mainstreamed" through overuse; I am alarmed that we want to. But my greater concern is whether this gratuitous nonsense should be the most visible test of political speech in an era when government officials tell us to watch our words--even words spoken in confidence to one's lawyer--and leave us to sort out precisely what that means.
"It's a great mistake not to feel pleased when you have the chance," a rich, disfigured spinster advises a frail, well-mannered boy in The Shrimp and the Anemone, the first novel in L.P. Hartley's Eustace and Hilda trilogy. The boy has won a hand of piquet, and the spinster has noticed that he has difficulty enjoying triumphs. Miss Fothergill (like many of Hartley's characters, the spinster has an outlandishly characteristic name) foresees that her 10-year-old friend may not have ahead of him many occasions of pleasure to waste.
Rather than disobey Miss Fothergill, I will readily admit that I have felt pleased while reading Eustace and Hilda, and very pleased while reading Hartley's masterpiece, The Go-Between. It was a spice to my pleasure that even though the Eustace and Hilda trilogy was first published between 1944 and 1947, and The Go-Between in 1953, I had not even heard of L.P. Hartley before the novels were reissued recently as New York Review Books Classics.
I blame my ignorance on an academic education. Hartley is not the sort of author discussed in schools. He is in no way postmodern. He is modern only in his frugality with sentiment and his somewhat sheepish awareness that the ideas of Marx and Freud are abroad in the world, rendering it slightly more tricky than it used to be to write unself-consciously about unathletic middle-class English boys who have been led by their fantasies and spontaneously refined tastes into the country homes of the aristocracy. If Hartley belongs to any academic canon, it would be to the gay novel, whose true history must remain unwritten until the theorists have been driven from the temple and pleasure-loving empiricists loosed upon the literary critical world.
Hartley belongs with Denton Welch and J.R. Ackerley. The three have different strengths: Welch is sensuous, Ackerley is funny and Hartley is a delicate observer of social machinery. But all are sly and precise writers, challenged by a subject inconvenient for novelizing: the emotional life of gay men.
They met the challenge with unassuming resourcefulness, writing what might be called fairy tales. Hans Christian Andersen was their pioneer, as the first modern writer to discover that emotions considered freakish and repellent in adults could win sympathy when expressed by animals and children. Andersen also discovered that a plain style was the best disguise for this kind of trickery and that the disgust of even the most intolerant readers could be charmed away by an invitation to learn how queer characters came to be the way they are. Thus in Ackerley, Welch and Hartley one finds gentle transpositions--from human to animal, from adulthood to childhood, from health to illness--disarmingly exact language and just-so stories about strange desires. Once upon a time, a man fell in love with another man's dog. Once upon a time, a boy on a bicycle was hit by a car and could not find pleasure again except in broken things. Once upon a time, a boy was made to have tea with a crooked-faced, dying woman, and to his surprise he liked her. The effect is a mood of tenderness; the stories are sweet and a bit mournful.
Hartley loved Hans Christian Andersen, but it was another writer who provided him with a defense of gentle transposition as a novelistic practice: Nathaniel Hawthorne, whose daguerreotype by Mathew Brady is the disconcertingly austere frontispiece of The Novelist's Responsibility, Hartley's 1967 collection of literary criticism. In the preface to The Blithedale Romance, Hawthorne had described the novelist's need for a "Faery Land, so like the real world, that in a suitable remoteness one cannot well tell the difference, but with an atmosphere of strange enchantment, beheld through which the inhabitants have a propriety of their own." Hartley quoted the passage with approval.
Lost time was Hartley's fairyland. "The past is a foreign country: they do things differently there," he wrote in the first, and most famous, sentence of The Go-Between. (He may have been echoing the first sentence of A Sentimental Journey, where Laurence Sterne had written, "They order...this matter better in France," which was Sterne's fairyland.) The remembered world could be as rich and vivid as the real one and yet would always stand at a remove. One could visit but not live there. As Hawthorne explained in his introduction to The Scarlet Letter, in another passage quoted by Hartley, there is something romantic about "the attempt to connect a bygone time with the very present which is flitting away from us."
The Go-Between opens with such an attempt. Leo Colston, a bachelor librarian in his 60s, has begun to sort his papers--apparently in preparation for his death, since he seems to have nothing else to look forward to. He starts by opening "a rather battered red cardboard collar-box." It is full of childhood treasures: "two dry, empty sea-urchins; two rusty magnets, a large one and a small one, which had almost lost their magnetism; some negatives rolled up in a tight coil; some stumps of sealing-wax; a small combination lock with three rows of letters; a twist of very fine whipcord; and one or two ambiguous objects, pieces of things, of which the use was not at once apparent: I could not even tell what they had belonged to." At the bottom of the box is a diary, and at first Colston cannot remember what the diary contains. Then he remembers why he does not want to remember it.
My secret--the explanation of me--lay there. I take myself much too seriously, of course. What does it matter to anyone what I was like, then or now? But every man is important to himself at one time or another; my problem had been to reduce the importance, and spread it out as thinly as I could over half a century. Thanks to my interment policy, I had come to terms with life...
A secret naturally arouses the reader's curiosity, but Colston's attitude toward his secret is a further provocation. The events in the diary, he implies, were both inconsequential and traumatic. He preferred a lifelong effort of forgetting over any attempt to come to terms; only by burying "the explanation of me" could he find a way to live. "Was it true...that my best energies had been given to the undertaker's art? If it was, what did it matter?" An unacknowledged wound, a buried definition of the self... The penumbra around Colston's secret is typical of a closeted homosexual, and yet what follows is neither a same-sex love story nor a coming-out narrative.
In the course of the novel, Colston does discover the facts of life and has at least an intuition of his oblique relation to them, but in The Go-Between Hartley was most intensely concerned with his hero's first experiences of sin and grace. This second, more surprising parallel with Hawthorne is the crucial one. Hartley once wrote that "Hawthorne thought that human nature was good, but was convinced in his heart that it was evil." Hartley was in a similar predicament.
Who would have guessed that the Edwardian sexual awakening of a delicate, precociously snobbish 13-year-old would have anything in common with the Puritan crimes and penitence that fascinated Hawthorne? Yet for Hartley, as for Hawthorne, the awareness of sin is a vital stage of education and a condition of maturity. At first young Leo Colston resists it. "It was like a cricket match played in a drizzle, where everyone had an excuse--and what a dull excuse!--for playing badly."
His moral code at the outset is the pagan one of schoolboys; he believes in curses and spells, and in triumphing over enemies by any means except adult intervention. But at the invitation of a classmate, Leo spends his summer vacation at Brandham Hall, a well-appointed Georgian mansion in Norfolk, and there his world is softened by love, in the person of the classmate's older sister, Marian. She is beautiful, musical and headstrong. Leo brings her messages from her fiancé, Hugh Winlove, Lord Trimingham, and billets from her lover, a local farmer named Ted Burgess. With her love comes sin--not because sexuality is evil, though it may be, but because after he has felt its touch, Leo can no longer think of the people he struggles with as enemies. The lovers make a terrible use of him, but he cares most about those who use him worst. In their triangle, he is incapable of taking a side; he is, after all, their go-between.
If you map Hartley onto Hawthorne too methodically, you arrive at the odd conclusion that Leo is part Chillingworth, part Pearl. This is not quite as silly as it sounds. Like them, Leo is jealous of the lovers he observes and is trapped in their orbit; nothing is lost on him, and he is unable to make emotional sense of what he knows. (His apprehension without comprehension is a boon for the reader, who through him sees the social fabric in fine focus.) But unlike Hawthorne's characters, Leo is a boy starting his adolescence, and that process, which he fears will defeat him, is at the heart of The Go-Between. Leo knows that the end of his childhood ought to be "like a death, but with a resurrection in prospect." His resurrection, however, is in doubt.
Like most fairy tales, the tale of how Leo becomes a fairy will not be fully credible to worldly readers. The Oedipal struggle will seem too bald, the catastrophe too absolute. Hartley was aware of this shortcoming. He knew that he found sexuality more awful than other people did, and in The Novelist's Responsibility, he wrote about his attempt to compensate for it while writing the Eustace and Hilda trilogy: "I remember telling a woman novelist, a friend of mine, about a story I was writing, and I said, perhaps with too much awe in my voice, 'Hilda is going to be seduced,' and I inferred that this would be a tragedy. I shall never forget how my friend laughed. She laughed and laughed and could not stop: and I decided that my heroine must be not only seduced, but paralysed into the bargain, if she was to expect any sympathy from the public."
Hartley's friend would probably have laughed at Hilda's paralysis, too. In the trilogy, Hilda is the older, stronger-willed sister of the exquisitely polite Eustace, who grows up in her shadow, a little too fond of its darkness. Their symbiosis in the first volume is brilliant and chilling, but her paralysis in the third is unconvincing. It is implausible that the demise of a love affair would literally immobilize an adult woman. Fortunately, it happens offstage, and a few of the book's characters do wonder if she is malingering.
However, the lack of perspective may be inextricable from Hartley's gifts. His writing is so mournful and sweet because he is willing to consider seriously terrors that only children ought to have, and perhaps only a man who never quite figured manhood out could still consider them that way. The second and third volumes of Eustace and Hilda are as elegant as the first, but not as satisfying, because Eustace's life becomes too vicarious to hold the reader's attention--and because the characters have grown up. Hartley's understanding of children is sophisticated, but he seems to have imagined adults as emotionally limited versions of them--as children who have become skilled at not thinking unpleasant thoughts. As a writer, his best moments are in describing terror at age 13 and the realization at 60-odd that one need not have been so terrified after all. In The Go-Between, artfully, the intervening years are compressed into the act of recollection, and the novel's structure fits the novelist's talents like a glove.
Perhaps time is our invention
To make things seem to move
Like the uncovering tail of the blue jay
As it lights its feet on the wet
Perhaps the seasons are really not
More than a single space with walls inside, disconnected
While fall and winter, and spring
Which we always anticipate, are only
Expansions of our own longings.
Perhaps there is only the now
Neither age nor youth, not even the vertigo of memories stilettoed
Except wounded into this present second
Shorter than the birth of a cell, or the nest dropped
With the sun and the rain always out together.
This center is absolute, it needs no endlessness
For heaven or hell. Or for creation, our own illusion of ourselves.
The minor variations we unfold are all the same
Inherently permutating at once
Repeating one design. Obscure. Lit at the edges of our time.
There are those opposed to the use of cloning technology to create human embryos for stem-cell research whose concerns emanate from commitments to social justice. One of their arguments runs as follows: The idea driving this medical research is that by creating an embryo through cloning, we can produce embryonic stem cells that are a perfect genetic match for a patient. All that is required to conduct the cloning is a skin cell from which to extract the patient's DNA and...a human egg.
Where, cry out the social justice advocates, are we going to get all these eggs for all these patients? Do the math, they suggest: 17 million American diabetics, needing anywhere from 10 to 100 eggs each, since the cloning technology is far from efficient...and even if you can pull that off, Christopher Reeve is still not walking, Michael J. Fox and Janet Reno still tremble and Ronald Reagan still doesn't remember who Ronald Reagan was. The social justice folk maintain that the billions of eggs required for embryonic stem cell therapies for the millions of Americans suffering from chronic and degenerative diseases will be obtained through exploitation of poor women in this country and the world over. Surplus value will take on an even more nefarious meaning.
Still, the early results from embryonic stem-cell therapy in mice are so dramatic that not to pursue this medical research would be morally obscene and just plain stupid. At the University of California, Dr. Hans Keirstead was able to implant neurological tissue derived from embryonic stem cells in a mouse with partial spinal cord injury so that after eight weeks, the mouse had regained most of its ability to walk and, of major significance to the quarter-million Americans suffering from this tragic condition, had also regained bladder and bowel control. Yet the question remains, where are we going to get all those eggs?
A call to Stanford University's Paul Berg, a Nobel laureate who has been testifying to Congress on behalf of embryonic stem-cell research, helps elucidate the answer: When it comes to the research, he says, the quantity required may not be a problem. But if the desired therapeutic potential of embryonic stem cells is fully realized, the need for eggs will be great and could short-circuit the availability of these therapies. But a solution to that may be possible, Berg insists. If research is carried out that identifies the biochemicals in the egg directing the genetic material to develop into an embryo, then we could extract and fractionate those biochemicals and insert them into any skin cell, for example, for use in the cloning process. Voilà! A skin cell becomes an egg, and skin cells are plentiful.
The immediate enthusiasm for this breakthrough scientific idea, which could help Reeve walk again while simultaneously obviating the motive for an exploitative human egg market, is quickly tempered by the full realization of what Berg has explained: When we acquire the ability to use any cell as an egg, we will have removed another obstacle to achieving complete control over human reproduction. Admittedly, complete control over the production of reproduction will require a womb for gestation--but that ultimately should prove to be just another biochemical matter for extraction and fractionation.
This, then, is how it goes in biotechnology, the essential dynamic that simultaneously gives rise to medical hope and moral vertigo. Each step forward produces a new problem, the solution to which demands further control over the biological mechanism known as a human being. But this somehow impinges on human beings or some portion of ourselves that we value. To deal with the attendant moral quandaries, a method is found to isolate and duplicate the underlying molecular process. The moral quandary has thus been replaced by an extracorporeal biochemical process, no longer strictly identified as human, and therefore a process that no one can reasonably value apart from its use. The problem, as bioethicist Eric Juengst puts it, is that we could thereby successfully cope with every moral dilemma posed by biotechnology and still end up with a society none of us would wish to live in. For Francis Fukuyama, this is Our Posthuman Future, as he has titled his new book on the subject.
Fukuyama's most famous previous theoretical foray was to declare, in 1989, an end to history, whereby a capitalist liberal democratic structure represented the final and most satisfying endpoint for the human species, permitting the widest expression of its creative energies while best controlling its destructive tendencies. He imagined that ultimately, with the universal acceptance of this regime, the relativist impasse of modern thought would in a sense resolve itself.
But thirteen years after the end of history, Fukuyama has second thoughts. He's discovered that there is no end of history as long as there is no end of science and technology. With the rapidly developing ability of the biological sciences to identify and then alter the genetic structure of organisms, including humans, he fears the essence of the species is up for grabs. Since capitalist liberal democratic structures serve the needs of human nature as it has evolved, interference by the bio-engineers with this human nature threatens to bring the end of history to an end.
The aim of Our Posthuman Future is "to argue that [Aldous] Huxley was right," Fukuyama announces early on, referring to Huxley's 1932 vision of a Brave New World. Multiple meanings are intended by Fukuyama: The industrialization of all phases of reproduction. The genetic engineering of the individuals produced by that process, thereby predetermining their lives. The tyrannical control of this population through neurochemical intervention, making subservience experientially pleasurable. Fukuyama cites specific contemporary or projected parallels to Huxley's Hatchery and Conditioning Center, Social Predestination Room and soma. In Fukuyama's terms, the stakes in these developments are nothing less than human nature itself.
The first of the book's three parts lays out the case that the biotechnologically driven shift to a posthuman era is already discernible and describes some of the potential consequences. Prozac and Ritalin are precursors to the genomically smart psychotropic weapons of the near future. Through these drugs, which energize depressed girls and calm hyperactive boys, we are being "gently nudged toward that androgynous median personality, self-satisfied and socially compliant, that is the current politically correct outcome in American society." Standardization of the personality is under way. This is the area to watch, Fukuyama asserts, because virtually everything that the popular imagination envisions genetic engineering accomplishing is much more likely to be accomplished sooner through neuropharmacology.
Increased life spans and genetic engineering also offer mostly dystopic horizons, whereby gerontocracies take power over societies whose main purpose has become the precision breeding of their progeny. The ancient instincts for hierarchical status and dominance are still the most powerful forces shaping this new world born from biotechnology. Since, as Fukuyama sees it, science does not necessarily lead to the equality of respect for all human beings demanded by liberal egalitarianism, the newest discoveries will serve the oldest drives. We are launched on a genetic arms race.
But be warned: We may not arrive in that new world through some dramatic struggle in which we put up a fight. Rather, the losses to our humanity may occur so subtly that we might "emerge on the other side of a great divide between human and posthuman history and not even see that the watershed had been breached because we lost sight of what that [human] essence was."
If this terrible event is to be prevented, then the human essence, which Fukuyama correlates with human nature itself, must be identified and kept inviolable. But where is the line to be drawn around "human nature"--a line to which we can all adhere, so that we might reap the benefits of biotechnology while preventing the nightmare scenarios from ever coming to pass?
The entire world today wants the answer to this. Fukuyama promises to deliver it. But despite the clarity with which he announces his mission, the author advises his readers, "Those not inclined to more theoretical discussions of politics may choose to skip over some of the chapters here." Yet these are the very chapters containing the answer we all seek in order to tame the biotechnology beast! This, then, signals that we are entering dangerous ground, and we will need to bear with the author's own means of revealing his great discovery, which may be skipped over at our own peril.
In this heart of the book, titled "Being Human," Fukuyama first seeks to restore human nature as the source of our rights, our morality and our dignity. In particular, he wishes to rescue all these dimensions from the positivist and utilitarian liberal philosophers who, closely allied with the scientific community, have dominated the debate over biotechnology. According to the author, these philosophers assign rights everywhere and emphasize the individual as the source of moral concern. In doing so, they put humankind and its collective life at risk before the juggernaut of biotechnology. John Rawls and Ronald Dworkin, among others, have elevated individual autonomy over inherently meaningful life plans, claims Fukuyama, who then questions whether moral freedom as it is currently understood is such a good thing for most people, let alone the single most important human good.
Rather than our individual autonomy or moral freedom, Fukuyama wishes that we would attend to the logic of human history, which is ultimately driven by the priorities that exist among natural human desires, propensities and behaviors. Since he wishes us to shift ground to the logic of the inherent and the natural, he must finally define that core composing human nature:
The definition of the term human nature I will use here is the following: human nature is the sum of the behavior and characteristics that are typical of the human species, arising from genetic rather than environmental factors.
Later he will refine this further to the innate species-typical forms of cognition, and species-typical emotional responses to cognition. What he is really after is not just that which is typical of our species but that which is unique to human beings. Only then will we know what needs the greatest safeguarding. After hanging fire while reviewing the candidates for this irreducible, unique core to be defended, including consciousness and the most important quality of a human being, feelings, Fukuyama finally spills the beans:
What is it that we want to protect from any future advances in biotechnology? The answer is, we want to protect the full range of our complex, evolved natures against attempts at self-modification. We do not want to disrupt either the unity or the continuity of human nature, and thereby the human rights that are based on it.
So, where are we? It would seem we have gone full circle. Human nature is defined by...human nature! To the extent that it is capable of being located in our material bodies, it is all that arises from our genetics. Any attempt at greater precision is a violation of our unity or continuity--and threatens to expose the author's empty hand. Through such sophistry, Fukuyama wishes to assert mastery over any biotechnological innovation that he considers threatening, since he can now arbitrarily choose when it is disruptive of the unity or continuity of the human nature arising from our genetics. Even a heritable cancer could qualify for protection under Fukuyama's rubric for that which is to be defended from biotechnological intervention.
Indeed, there are those agreeing with Fukuyama's view of the biological bases of human social life who draw opposite conclusions about human bioengineering, viewing it as humanity's last best hope.
The remainder of the book is a potpourri of tactical suggestions (embedded in rhetoric cloned from Fukuyama's mentor in these matters, bioethicist Leon Kass) of which biotechnologies should be controlled, and of the need for both national and international bodies and systems to do so, if such control is to be effective. That, in the end, may be the most surprising aspect of the book. All this fervid philosophizing in reaction to fears about a Brave New World, fervently working toward the radical conclusion that what is needed is...regulation. Although obviously recognition of the need for regulation might well be experienced as a radical trauma by someone who has previously placed an overabundance of faith in the market.
But one would be foolish to believe that Fukuyama has gone all this distance simply to argue for what he refers to at one point as a more nuanced regulatory approach. In his most public engagement with biotechnology thus far, he has endorsed, written and testified to Congress on behalf of a bill that will not only ban human reproductive cloning but also ban nonreproductive cloning for stem-cell research. The legislation he supports would also make any doctor who utilizes or prescribes a treatment developed with cloning technology subject to ten years in prison and a $1 million fine. Under this legislation, then, if a cure or treatment for diabetes or heart failure is created in England that used embryo cloning to harvest stem cells for therapy, US physicians would not be allowed to have access to such treatments for their patients. This is his lesson in how moral freedom is not such a good thing compared with an inherently meaningful life plan. Let the fragile diabetic or spinal cord-injury victim learn the true value of our human nature from their catheterized bladders!
Fukuyama's entire brief depends upon avoiding the consequences of his own logic. Having identified the human essence with our biological human nature, he must evade any further specification or else the particular tissues, cells or molecules would be subject to further discussion and analysis as to whether or not they represent the human essence. Rather than discussion, we should trade in our autonomy and moral freedom for his protections. By the close of the book, any moral qualms on his part fall entirely by the wayside. Fukuyama is perhaps aware that he has failed to make his case except to those ready to believe. The book culminates in a final paragraph that is nothing less than a temper tantrum:
We do not have to accept any of these future worlds under a false banner of liberty, be it that of unlimited reproductive rights or of unfettered scientific inquiry. We do not have to regard ourselves as slaves to inevitable technological progress when that progress does not serve human ends. True freedom means the freedom of political communities to protect the values they hold most dear...
Nice rhetoric until we recall the values of the types of political regimes to which moral freedom and science must be sacrificed. While Fukuyama rails against the Brave New World, he takes the side of Huxley's World Controller, who explains, "Truth's a menace, science is a public danger...That's why we so carefully limit the scope of its researches."
There is an alternative to the fear that human nature must be inviolable because human nature cannot be trusted. We have seen imperious dictates against science and moral freedom delivered by philosophers before. In the recent past, we have evidence of very similar ideas in very similar language issuing from the philosopher whom Fukuyama draws upon for the epigraph beginning the first chapter of his book, Martin Heidegger. In the 1930s Professor Heidegger wanted science to serve the German essence, and it did. Now Professor Fukuyama wants science, and all of us, to serve the human essence, which he equates with his version of sociobiology infused with German romantic holism. Once more, we witness someone who would stop tyranny by imposing a tyranny of his own. Since Francis Fukuyama now sits on the President's Council on Bioethics, we should be grateful for the warning.
"Thirty years from now the big university campuses will be relics," business "guru" Peter Drucker proclaimed in Forbes five years ago. "It took more than 200 years for the printed book to create the modern school. It won't take nearly that long for the [next] big change." Historian David Noble echoes Drucker's prophecies but awaits the promised land with considerably less enthusiasm. "A dismal new era of higher education has dawned," he writes in Digital Diploma Mills. "In future years we will look upon the wired remains of our once great democratic higher education system and wonder how we let it happen."
Most readers of this magazine will side with Noble in this implicit debate over the future of higher education. They will rightly applaud his forceful call for the "preservation and extension of affordable, accessible, quality education for everyone" and his spirited resistance to "the commercialization and corporatization of higher education." Not surprisingly, many college faculty members have already cheered Noble's critique of the "automation of higher education." Although Noble himself is famously resistant to computer technology, the essays that make up this book have been widely circulated on the Internet through e-mail, listservs and web-based journals. Indeed, it would be hard to come up with a better example of the fulfillment of the promise of the Internet as a disseminator of critical ideas and a forum for democratic dialogue than the circulation and discussion of Noble's writings on higher education and technology.
Noble performed an invaluable service in publishing online the original articles upon which this book is largely based. They helped initiate a broad debate about the value of information technology in higher education, about the spread of distance education and about the commercialization of universities. Such questions badly need to be asked if we are to maintain our universities as vital democratic institutions. But while the original essays were powerful provocations and polemics, the book itself is a disappointing and limited guide to current debates over the future of the university.
One problem is that the book has a dated quality, since the essays are reproduced largely as they were first circulated online starting in October 1997 (except for some minor editorial changes and the addition of a brief chapter on Army online education efforts). In those four-plus years, we have watched the rise and fall of a whole set of digital learning ventures that go unmentioned here. Thus, Noble warns ominously early in the book that "Columbia [University] has now become party to an agreement with yet another company that intends to peddle its core arts and science courses." But only in a tacked-on paragraph in the next to last chapter do we learn the name of the company, Fathom, which was launched two years ago, and of its very limited success in "peddling" those courses, despite Columbia president George Rupp's promise that it would become "the premier knowledge portal on the Internet." We similarly learn that the Western Governors' Virtual University "enrolled only 10 people" when it opened "this fall" (which probably means 1998, when Noble wrote the original article) but not that the current enrollment, as of February 2002, is 2,500. For the most part, the evidence that Noble presents is highly selective and anecdotal, and there are annoyingly few footnotes to allow checking of sources or quotes.
The appearance of these essays with almost no revision from their initial serial publication on the Internet also helps to explain why Noble's arguments often sound contradictory. On page 36, for example, he may flatly assert that "a dismal new era of higher education has dawned"; but just twenty-four pages later, we learn that "the tide had turned" and that "the bloom is off the rose." Later, he reverses course on the same page, first warning that "one university after another is either setting up its own for-profit online subsidiary or otherwise working with Street-wise collaborators to trade on its brand name in soliciting investors," but then acknowledging (quoting a reporter) that administrators have realized "that putting programs online doesn't necessarily bring riches." When Noble writes that "far sooner than most observers might have imagined, the juggernaut of online education appeared to stall," he must have himself in mind, two chapters earlier. Often, Noble is reflecting the great hysteria about online education that swept through the academy in the late 1990s. At other times (particularly when the prose has been lightly revised), he indicates the sober second thoughts that have more recently emerged, especially following the dot-com stock market crash in early 2000.
In the end, one is provided remarkably few facts in Digital Diploma Mills about the state of distance education, commercialization or the actual impact of technology in higher education. How many students are studying online? Which courses and degrees are most likely to appear online? How many commercial companies are involved in online education? To what degree have faculty employed computer technology in their teaching? What has been the impact on student learning? Which universities have changed their intellectual property policies in response to digital developments? One searches in vain in Noble's book for answers, or even for a summary of the best evidence currently available.
Moreover, Noble undercuts his own case with hyperbole and by failing to provide evidence to support his charges. For example, most readers of his book will not realize that online distance education still represents a tiny proportion of college courses taken in the United States--probably less than 5 percent. Noble sweepingly maintains, "Study after study seemed to confirm that computer-based instruction reduces performance levels." But he doesn't cite which studies. He also writes, "Recent surveys of the instructional use of information technology in higher education clearly indicate that there have been no significant gains in pedagogical enhancement." Oddly, here Noble picks up the rhetoric of distance-education advocates who argue that there is "no significant difference" in learning outcomes between distance and in-person classes.
Many commentators have pointed out Noble's own resistance to computer technology. He refuses to use e-mail and has his students hand-write their papers. Surely, there is no reason to criticize Noble for this personal choice (though one feels sorry for his students). Noble himself responds defensively to such criticisms in the book's introduction: "A critic of technological development is no more 'anti-technology' than a movie critic is 'anti-movie.'" Yes, we do not expect movie critics to love all movies, but we do expect them to go to the movies. Many intelligent and thoughtful people don't own television sets, but none of them are likely to become the next TV critic for the New York Times. Thus, Noble's refusal to use new technology, even in limited ways, makes him a less than able guide to what is actually happening in technology and education.
Certainly, Noble's book offers little evidence of engagement with recent developments in the instructional technology field. One resulting distortion is that some readers will think that online distance education is the most important educational use of computer technology. Actually, while very few faculty teach online courses, most have integrated new technology into their regular courses--more than three-fifths make use of e-mail; more than two-fifths use web resources, according to a 2000 campus computing survey. And few of these faculty members can be characterized, as Noble does in his usual broad-brush style, as "techno-zealots who simply view computers as the panacea for everything, because they like to play with them."
Indeed, contrary to Noble's suggestion, some of the most thoughtful and balanced criticisms of the uses of technology in education have come from those most involved with its application in the classroom. Take, for example, Randy Bass, a professor of English at Georgetown University, who leads the Visible Knowledge Project (http://crossroads.georgetown.edu/vkp), a five-year effort to investigate closely whether technology improves student learning. Bass has vigorously argued that technological tools must be used as "engines of inquiry," not "engines of productivity." Or Andrew Feenberg, a San Diego State University distance-education pioneer as well as a philosopher and disciple of Herbert Marcuse, who has insisted that educational technology "be shaped by educational dialogue rather than the production-oriented logic of automation," and that such "a dialogic approach to online education...could be a factor making for fundamental social change."
One would have no way of knowing from Noble's book that the conventional wisdom of even distance-education enthusiasts is now that cost savings are unlikely, or that most educational technology advocates, many of them faculty members, see their goal as enhancing student learning and teacher-student dialogue. Noble, in fact, never acknowledges that the push to use computer technology in the classroom now emanates at least as much from faculty members interested in using these tools to improve their teaching as it does from profit-seeking administrators and private investors.
Noble does worry a great deal about the impact of commercialization and commodification on our universities--a much more serious threat than that posed by instructional technology. But here, too, the book provides an incomplete picture. Much of Noble's book is devoted to savaging large public and private universities--especially UCLA, which is the subject of three chapters--for jumping on the high-technology and distance-education bandwagons. Yet at least as important a story is the emergence of freestanding, for-profit educational institutions, which see online courses as a key part of their expansion strategy. For example, while most people think of Stanley Kaplan as a test-preparation operation, it is actually a subsidiary of the billion-dollar Washington Post media conglomerate; it owns a chain of forty-one undergraduate colleges and enrolls more than 11,000 students in a variety of online programs, ranging from paralegal training to full legal degrees at its Concord Law School, which advertises itself as "the nation's only entirely online law school." This for-profit sector is growing rapidly and becoming increasingly concentrated in a smaller number of corporate hands. The fast-growing University of Phoenix is now the largest private university in the United States, with more than 100,000 students, almost one-third of them in online programs, which are growing more than twice as fast as its brick-and-mortar operation. Despite a generally declining stock market, the price of the tracking stock for the University of Phoenix's online operation has increased more than 80 percent in the past year.
As the Chronicle of Higher Education reported last year, "consolidation...is sweeping the growing for-profit sector of higher education," fueled by rising stock prices in these companies. This past winter, for example, Education Management Corporation, with 28,000 students, acquired Argosy Education Group and its 5,000 students. The threat posed by these for-profit operations is rooted in their ability to raise money for expansion through Wall Street ("Wall Street," jokes the University of Phoenix's John Sperling, "is our endowment") and by diminishing public support for second-tier state universities and community colleges (the institutions from which for-profits are most likely to draw new students). Yet, except for an offhand reference to Phoenix, Digital Diploma Mills says nothing about these publicly traded higher-education companies. And these for-profit schools are actually only a small part of the more important and much broader for-profit educational "sector," which is also largely ignored by Noble and includes hundreds of vendors of different products and services, and whose size is now in the hundreds of billions of dollars--what Morgan Stanley Dean Witter calls, without blushing, an "addressable market opportunity at the dawn of a new paradigm."
Noble does provide a strong cautionary tale--the most informative chapter in the book--in his account of the involvement of UCLA's extension division with a commercial company called Onlinelearning.net. He shows how some UCLA administrators as early as 1993 greedily embraced a vision of riches to be made in the online marketing of the college's extension courses. UCLA upper management apparently bought the fanciful projections of their commercial partners that the online venture would generate $50 million per year within five years--revenues that in reality quickly plummeted below $1 million annually. But Noble conflates the UCLA online-extension debacle with a more benign effort by the UCLA College of Letters and Sciences, beginning in 1997, to require all instructors to post their course syllabuses on the web. He seems unwilling to draw distinctions between the venal and scandalous actions of top UCLA administrators and the sometimes ham-handed efforts of other administrators to get UCLA faculty to enhance their classes by developing course websites, a fairly common educational practice and a useful convenience for students. Three-fifths of UCLA students surveyed said that the websites had increased interactions with instructors, and social science faculty recently gave the website initiative a mostly positive evaluation.
Sounding an "early alarm" so that faculty members can undertake "defensive preparation and the envisioning of alternatives" is how Noble explains his purpose in writing Digital Diploma Mills. But will faculty be well armed if they are unaware of the actual landscape they are traversing? In the end, Noble leaves us only with a deep and abiding suspicion of both technology and capitalism. His analysis of technology and education does echo Marx's critique of capitalism, with its evocation of concepts like commodification, alienation, exchange and labor theories of value. But unlike Marx, who produced a critical analysis of the exploitative nature of early capitalist production without outright rejection of the technology that made industrialization possible, Noble cannot manage the same feat.
In the current political climate, Noble's undifferentiated suspicion of technology hinders us more than it helps us. Are we prepared to follow him in his suspicion of any use of technology in higher education? Are faculty members willing to abjure e-mail in communicating with their students and colleagues? Are instructors at small colleges with limited library collections prepared to tell their students not to use the 7 million online items in the Library of Congress's American Memory collection? Are they ready to say to students with physical disabilities that limit their ability to attend on-campus classes or conduct library research that they can't participate in higher education? Are faculty at schools with working adults who struggle to commute to campus prepared to insist that all course materials be handed directly to students rather than making some of those materials available online?
Similarly, what lines are we prepared to draw with respect to commercialization of higher education within the capitalist society in which we live? Are faculty willing to abandon publishing their textbooks with large media conglomerates and forgo having their books sold through nationwide bookstore chains? Are they prepared to say to working-class students who view higher education as the route to upward mobility that they cannot take courses that help them in the job market?
Noble's answer to most of these questions would undoubtedly be yes, insisting, as he does, that anything less than the "genuine interpersonal interaction," face to face, undermines the sanctity of the essential teacher-student relationship. In a March 2000 Chronicle of Higher Education online dialogue about his critique of technology in education, Noble complained that no one had offered "compelling evidence of a pedagogical advantage" in online instruction. (He pristinely refused to join online, and had a Chronicle reporter type in his answers relayed over the phone.) A student at UCLA, who had unexpectedly taken an online course, noted in her contribution to the Q&A that because she tended to be "shy and reserved," e-mail and online discussion groups allowed her to speak more freely to her instructor, and that she thought she retained more information in the online course than in her traditional face-to-face classes at UCLA. Noble rejected the student's conclusion that the online course had helped her find her voice, arguing that writing was "in reality not a solution, but an avoidance of the difficulty." "Speaking eloquently, persuasively, passionately," he concluded, "is essential to citizenship in a democracy." Putting aside the insensitivity of Noble's reply, his position, as Andrew Feenberg points out in Transforming Technology: A Critical Theory Revisited, is reminiscent of Plato's fear that writing (the cutting-edge instructional technology in the ancient world) would replace spoken discourse in classical Greece, thus destroying the student-teacher relationship. (Ironically, as Feenberg also notes, "Plato used a written text as the vehicle for his critique of writing, setting a precedent" for current-day critics of educational technology like Noble who have circulated their works on the Internet.)
The conservative stance of opposing all change--no technology, no new modes of instruction--is appealing because it keeps us from any possible complicity with changes that undercut existing faculty rights and privileges. But opposition to all technology means that we are unable to support "open source" technological innovations (including putting course materials online free) that constitute a promising area of resistance to global marketization. And it makes it impossible to work for protections that might be needed in a new environment. Finally, it leaves unchanged the growing inequality between full-time and part-time faculty that has redefined labor relations in the contemporary university--the real scandal of the higher-education workplace. Without challenging the dramatic differences in wages and workloads of full professors and adjunct instructors, faculty rejection of educational technology begins to remind us of the narrow privileges that craft workers fought to maintain in the early decades of industrial capitalism at the expense of the unskilled workers flooding into their workplaces.
We prefer to work from a more pragmatic and realistic stance that asks concretely about the benefits and costs of both new technology and new educational arrangements to students, faculty (full- and part-time) and the larger society. Among other things, that means that academic freedom and intellectual property must be protected in the online environment. And the faculty being asked to experiment with new technology need to be provided with adequate support and rewards for their (ad)ventures. As the astute technology commentator Phil Agre wrote when he first circulated Noble's work on the Internet, "the point is neither to embrace or reject technology but to really think through and analyze...the opportunities that technology presents for more fully embodying the values of a democratic society in the institutions of higher education."