On the occasion of its fiftieth anniversary, the so-called forgotten war was finally remembered. With the Associated Press's Pulitzer Prize-winning "revelation" a year ago that hundreds of civilians were massacred under a concrete bridge outside the village of Nogun-ri, and the recent "uncovering" of the execution of dozens of leftists by the South Korean Army before the battle of Taejon, the horrors of the Korean War are beginning to come to light.
To the survivors and witnesses of these tragedies, however, the truth of their experiences was never in question. Their remembrances were repressed in a variety of ways--by government authorities, who denied these events ever happened; by society at large, which wanted to forget the past and move on; by family members and friends, who did not want to hear about such painful things; or even by the survivors themselves, who held these memories inside for almost fifty years. As a result, many have never spoken of what they witnessed during the three-year conflict, in which more than a million Koreans and tens of thousands of US troops died. The Korean War continues--in the lives of survivors and in reality: no peace treaty was ever signed, only an armistice agreement in 1953. Hence the enormous headlines this past June, when leaders from the two Koreas held a summit meeting for the first time since the war began.
In her memoir, Ten Thousand Sorrows: The Extraordinary Journey of a Korean War Orphan, Elizabeth Kim tells of another forgotten legacy of the war. The daughter of a Korean mother and an American GI, Kim's curly hair and hazel eyes branded her as an outcast in Korean society, a honhyol--"a despicable name that meant nonperson, mixed race, animal." In a culture where patriarchal bloodlines form the basis of the most important structure in society--the family--mixed-race children were (and still are, in many cases) not tolerated. Kim writes, "National pride is deeply ingrained, and in Korea the intense love for the country's heritage and traditions has its darker side of hatred for anything that taints the purity of that heritage."
Kim begins her moving yet vague memoir with the horrific "honor killing" of her beloved Omma (mother) by her own grandfather and uncle, an act she claims to have witnessed as a young child while hiding in a basket. Omma had brought great shame to her family, many of whom were village elders, by producing a honhyol. She also had the audacity to refuse the generous offer of another family to allow her child to work in their home as a servant--a higher status than that of a half-breed. In her relatives' eyes, the family's honor could only be saved by Omma's death.
A sympathetic aunt leaves Kim, somewhere between the ages of 4 and 6 at the time (or maybe even younger), at a Christian orphanage without any components of her identity: "In a Korean's view, it would be better to be dead than to be the embodiment of shame such as I was: a honhyol, a female, nameless, without a birth date." Behind the bars of a crib, she is sustained by memories of her mother's love. "Omma told me that somewhere in the world it would be possible for me to become a person. She explained her Buddhist belief that life was made up of ten thousand joys and ten thousand sorrows, and all of them were stepping-stones to ultimate peace."
Kim relays her litany of sorrow in spare, poetic prose and never succumbs to self-pity or sentimentality. Her hellish existence as a nonperson continues in the squalid orphanage and even after her adoption by a white fundamentalist pastor and his dutiful wife from central California. Like many Korean adoptees of the time, Kim found herself in a community without any other Asians or people of color. Instead of being stigmatized by her Caucasian features, as she was in Korea, she was tormented because of her Asian ones in the rural desert community where she grew up. Her ultra-Christian parents reared her according to the edict of assimilation, never allowing Kim to speak of Korea or her birth mother. Instead, they openly disparaged the only person who showed her love: "My parents told me she was something very bad and sinful called a prostitute. She didn't love me, they said; it didn't matter to her whether I lived or died."
Kim carries the stigma of the honhyol well into her adult life, as her sorrows multiply. A loveless arranged marriage to a deacon in her parents' church follows her traumatic childhood, as do years of physical and psychological abuse. Mercifully, joy does make an appearance in this wrenching memoir, in the form of her daughter, Leigh. Kim finds the strength to spirit away the newborn from her schizophrenic husband before her daughter witnesses the abuse her mother has endured for years.
In the book's most affecting sections, the author describes her brief yet loving relationship with Omma, in which the two outcasts create a private world of their own in a shack just outside their village (a portion of this was excerpted in the first issue of Oprah Winfrey's magazine, O). Decades later, the next generation of mother and daughter also live in poverty on the outskirts of a small town and find happiness through stories and fantasy. "Whether in Korea or in America, the make-believe tapestry made life bearable."
But if fantasy was responsible for Kim's happiness, was it for her despair, as well? Just as the sources for the AP's story have come under question--in particular, one US soldier who originally admitted to getting the order to shoot civilians was not even in the vicinity of Nogun-ri at the time--Kim's story has come under scrutiny as well.
In September the Korea Herald, an English-language newspaper based in Seoul, published a letter to the editor titled "'Memoir' defames Korean culture." The author, Brian Myers, a Korean studies scholar in the United States, sharply criticized Ten Thousand Sorrows as "wildly inaccurate" in its descriptions of "Korean life, language and custom." He went on to write that Kim's "account of the Confucian 'honor killing' is so improbable, that the only question for me is whether she herself believes what she has written." Some in Korean studies have pointed out that it would have been more common in Korean culture at the time for a mother to have committed suicide than to have been murdered by members of her own family.
Answering such criticisms, Kim's publisher recently issued a carefully worded press release stating that "there are not sufficient studies for Ms. Kim and Doubleday to have stated as an established fact that there is a tradition of honor killings in Korea." Doubleday subsequently promised to delete the offending term in future paperback editions of Ten Thousand Sorrows. Kim, a longtime journalist, admitted to Associated Press reporter Hillel Italie that she was "careless" both in using the term "honor killing" (which is found primarily in Muslim cultures) and in stating in an admittedly "bad bit of writing" that Korea was divided by the Korean War, when in fact it was split years earlier, in 1945, after the country's liberation from Japan.
Considering Kim's background (she was most recently an editor at the Marin Independent Journal), her "bad bits of writing" are inexcusable and regrettable. In short, she should have known better. But as she has stated, her book "is not intended to be representative of Korean adoption or anything else. It's just my life." Kim's critics are quick to dismiss her account because of errors and inconsistencies; they point to her six-figure advance as motivation for sensationalizing the truth. But as any seasoned reader of memoirs knows, the genre tends toward self-reflection rather than historicity or definitiveness in describing a specific culture or experience.
Kim's critics forget, too, that the basis of any memoir is memory, which is by its very nature slippery, fragmented and often unreliable. What Kim is most guilty of in Ten Thousand Sorrows is not misrepresentation but neglecting to describe adequately the state and processes of her own memory. As a result, the book feels unfinished, like a work in progress, especially in the last sections, where it devolves into shards of self-help homilies. The book would have benefited greatly from a discussion of how the author's early-childhood recollections coalesced in her brain over time and why she chose to believe the version of what happened to her that she devoted to print. The book's unsatisfying ending suggests that perhaps the author hadn't quite achieved the distance necessary to deal with such questions when she wrote the book.
In his letter to the Korea Herald, Myers questioned whether Kim believes what she has written, implying that the author might be guilty of willful misrepresentation. The same charge has been leveled at the civilians and US servicemen who witnessed what happened at Nogun-ri. Is the inherent haziness of memory (especially that of the Korean War, half a century ago) enough reason to deny the actuality of events? If one thinks so--even in the afterglow of Kim Dae Jung and Kim Jong Il's first summit meeting--the wounds of the last battle of the cold war will never heal.
John Lennon once characterized his wife, Yoko Ono, as the world's "most famous unknown artist. Everybody knows her name, but nobody knows what she does." What she was famous for, of course, was him. The art for which she was unknown could not conceivably have made her famous--although even the most famous of artists would be obscure relative to the aura of celebrity surrounding the Beatle of Beatles and his bride. Yoko Ono had been an avant-garde artist in New York and Tokyo in the early 1960s, and part of an avant-garde art world itself very little known outside its own small membership. The most robust of her works were subtle and quiet to the point of near-unnoticeability. One of her performances consisted, for example, of lighting a match and allowing it to burn away. One of her works, which she achieved in collaboration with the movement known as Fluxus, consisted of a small round mirror which came in an envelope on which YOKO ono/self portrait was printed. It belonged in Fluxus I--a box of works by various Fluxus artists, assembled by the leader and presiding spirit of the movement, George Maciunas. But the contents of Fluxus I were themselves of the same modest order as Self Portrait. We are not talking about anything on the scale, say, of Les Demoiselles d'Avignon. We are speaking of things one would not see as art unless one shared the values and ideologies of Fluxus.
Fluxus, in that phase of its history, was much concerned with overcoming the gap between art and life, a concern inspired in part by John Cage's decision to widen the range of sounds available for purposes of musical composition. Cage's famous 4'33'' consisted of all the noises that occurred through an interval in which a performer, sitting at the piano, dropped his or her hands for precisely that length of time. A typical Fluxus composition was arrived at by selecting a time--3:15, say--from the railway timetable and considering all the sounds in the railway station for three minutes and fifteen seconds as the piece. As early as 1913, Marcel Duchamp made works of art out of the most aesthetically undistinguished vernacular objects, like snow shovels and grooming combs, and he was in particular eager to remove all reference to the artist's eye or hand from the work of art. "The intention," he told Pierre Cabanne in 1968, "consisted above all in forgetting the hand." So a cheap, mass-produced object like a pocket mirror could be elevated to the rank of artwork and be given a title. How little effort it takes to make a self-portrait! In The Republic Socrates made the brilliant point that if what we wanted from art was an image of visual reality, what was the objection to holding a mirror up to whatever we wished to reproduce? "[You] will speedily produce the sun and all the things in the sky, and the earth and yourself and the other animals and implements and plants." And all this without benefit of manual skill!
Fluxus made little impact on the larger art world of those years. I encountered it for the first time in 1984, at an exhibition at the Whitney Museum of American Art in New York surveying the art made in New York between 1957 and 1964. It was a show mainly of Pop Art and Happenings; and there were some display cases of Fluxus art, many of them objects of dismaying simplicity relative to what one expected of works of art in the early 1960s, exemplified by large heroic canvases with churned pigment and ample brush sweeps. Maciunas spoke of Fluxus as "the fusion of Spike Jones, vaudeville, gag, children's games and Duchamp"; and the display cases contained what looked like items from the joke shop, the children's counter in the dime store, handbills and the like. Ono's relationship to Fluxus is a matter of delicate art-historical analysis, but if she fit in anywhere, it would have been in the world Maciunas created around himself, where the artists and their audience consisted of more or less the same people. It was a fragile underworld, easy not to know about. Ono's work from that era has the weight of winks and whispers.
So it was as a largely unknown artist that Lennon first encountered her, at the Indica Gallery in London, in 1966. The point of intersection was a work titled YES Painting, which consists of a very tiny inscription of the single word Yes, written in India ink on primed canvas, hung horizontally just beneath the gallery's ceiling. The viewer was required to mount a stepladder, painted white, and to look at the painting through a magnifying lens, suspended from the frame. It was part of the work, as it was of much of Yoko Ono's art, then and afterward, that it required the participation of the viewer in order to be brought fully into being. Much of it, for example, had the form of instructions to the viewer, who helped realize the work by following the instructions, if only in imagination. The ladder/painting was a kind of tacit instruction, saying, in effect, like something in Alice in Wonderland, "Climb me." Somehow I love the fact that John Lennon was there at all, given what I imagine must have been the noisy public world of the Beatles, full of electric guitars and screaming young girls. Lennon climbed the ladder and read the word, which made a great impression on him. "So it was positive," he later said. "It's a great relief when you get up the ladder and you look through the spyglass and it doesn't say no or fuck you; it says YES." There was only the simple affirmative rather than the "negative... smash-the-piano-with-a-hammer, break-the-sculpture boring, negative crap. That 'YES' made me stay." It would be difficult to think of a work of art at once that minimal and that transformative.
"YES" is the name of a wonderful exhibition at the Japan Society, much of it given over to the works for which, other than to scholars of the avant-garde, Yoko Ono is almost entirely unknown. I refer to the work from the early sixties, a blend of Fluxus, Cage, Duchamp and Zen, but with a poetry uniquely Ono's own. The most innovative of the early works are the Instructions for Paintings, which tell the viewer what to do in order for the work to exist. These have the form of brief poems. Here, for example, is the instruction for a work called Smoke Painting:
Light canvas or any finished painting
with a cigarette at any time for any
length of time.
See the smoke movement.
The painting ends when the whole
canvas or painting is gone.
Here is another, called Painting in Three Stanzas:
Let a vine grow.
Water every day.
The first stanza--till the vine spreads.
The second stanza--till the vine withers.
The third stanza--till the wall vanishes.
Now these are instructions for the execution of a work, not the work itself. They exist for the purpose of being followed, like orders. In formal fact, the instructions are very attractive, written out in gracious Japanese calligraphy by, as it happens, Yoko Ono's first husband, Ichiyanagi Toshi, an avant-garde composer. It is true that the conception was hers, but whose handwriting inscribes the conception is entirely external to the work. Nothing could be closer to Duchamp's idea of removing the artist's hand from the processes of art. Duchamp was interested in an entirely cerebral art--the object was merely a means. And so these attractive sheets of spidery writing are merely means: The work is the thought they convey. "Let people copy or photograph your paintings," Ono wrote in 1964. "Destroy the originals." So the above instructions, in numbers equal to the press run of The Nation plus however many pass-alongs or photocopies may be made of this review, are as much or as little of "the work" as what you would see on the walls of the gallery. The question is not how prettily they are presented or even in what language they are written. The question is how they are received and what the reader of them does to make them true: The instructions must be followed for the work really to exist.
So how are we to comply? Well, we could trudge out to the hardware store, buy a shovel, pick up a vine somewhere, dig a hole, plant the vine, water it daily--and wait for the wall against which the vine spreads to vanish. Or we can imagine all this. The work exists in the mind of the artist and then in the mind of the viewer: The instructions mediate between the two. At the Indica Gallery, Ono exhibited Painting to Hammer a Nail. A small panel hung high on the wall, with a hammer hanging from its lower left corner. Beneath it was a chair, with--I believe--a small container of nails. If you wanted to comply with the implicit instructions, you took a nail, mounted the somewhat rickety chair, grasped the hammer and drove the nail in. At the opening, Ono recalls, "A person came and asked if it was alright to hammer a nail in the painting. I said it was alright if he pays 5 shillings. Instead of paying the 5 shillings, he asked if it was alright to hammer an imaginary nail in. That was John Lennon. I thought, so I met a guy who plays the same game I played." Lennon said, "And that's when we really met. That's when we locked eyes and she got it and I got it and, as they say in all the interviews we do, the rest is history."
Jasper Johns once issued a set of instructions that became famous: "Take an object./Do something to it./Do something else to it." Ono's version would be "Imagine an object./Imagine doing something to it./Imagine doing something else to it." Ono's enthusiasts like to say how far ahead of her time she was, based on some entirely superficial parallels between her Instructions for Paintings and certain works of Conceptual Art, which also consisted of words hung on the wall. Thus in 1967 Joseph Kosuth composed a work that reproduced the definition of the word "Idea" as it appears in a dictionary. The title of the work is Art as Idea as Idea. The work of art is the idea of idea (Spinoza--profoundly--defined the mind as idea ideae). For reasons entirely different from Ono's, Kosuth was bent on transforming art into thought.
Art historians are always eager to establish priority, usually by finding resemblances that have little to do with one another. In truth, Ono was precisely of her own time. It was a time when the very idea of art was under re-examination by artists. Works of art can never have been more grossly material--heavy, oily, fat--than under the auspices of Abstract Expressionism. But the aesthetic experiments of Cage, of Fluxus and of Yoko Ono were not, in my view, addressed to the overthrow of Abstract Expressionism. They were rather applications of a set of ideas about boundaries--between artworks and ordinary things, between music and noise, between dance and mere bodily movement, between score and performance, between action and imagining action, between artist and audience. If the impulse came from anywhere, it came from Zen. Cage was an adept of Zen, which he transmitted through his seminars in experimental composition at The New School. Dr. Suzuki, who taught his course in Zen at Columbia, was a cult figure for the art world of the fifties. Yoko Ono had absorbed Zen thought and practice in Japan. The aim of Zen instructions was to induce enlightenment in the mind of the auditor, to transform his or her vision of world and self. The aim of Ono's instructions was similarly to induce enlightenment in the mind of the viewer--but it would be enlightenment about the being of art as the reimagination of the imagined. In her fine catalogue essay, Alexandra Munroe, director of the Japan Society Gallery, writes, "Asian art and thought were the preferred paradigm for much of the American avant-garde." Abstract Expressionism and the New York avant-garde exemplified by Cage, Fluxus and Ono belong to disjointed histories that happened to intersect in Manhattan at the same moment.
At the time of their marriage, Ono said that she and John Lennon would make many performances together, and the fact that Lennon set foot in the Indica Gallery in the first place and engaged with Yoko Ono in that atmosphere implies that he found something in art that was lacking in the world of popular music, for all his great success. It is characteristic that for him, art meant performance--not painting on the side, which was to become an outlet for his fellow Beatle Paul McCartney (there is an exhibition of McCartney's paintings making the rounds today). What Ono offered Lennon was a more fulfilling way of making art, and inevitably she was blamed for the dissolution of the band. What Lennon offered Ono was a way of using her art to change minds not just in terms of the nature of art and reality but in terms of war and peace. In 1968 Yoko Ono declared that "the art circle from which I came is very dead, so I am very thrilled to be in communication with worldwide people." One of Yoko Ono's most inspired pieces was her White Chess Set of 1966 (a version of which, Play It By Trust, can be seen in the Japan Society lobby). Instead of two opposing sides, one black and one white, she painted everything--the board and the pieces--white. Since one cannot tell which pieces belong on which side, the game quickly falls apart. "The players lose track of their pieces as the game progresses; Ideally this leads to a shared understanding of their mutual concerns and a new relationship based on empathy rather than opposition. Peace is then attained on a small scale." But with Lennon, she and he could attempt to achieve peace on the largest scale--could use art to transform minds. In 1969, for example, they enacted their Bed-in for Peace. The tremendous widening of the concept of art earlier in the decade made it possible for being in bed together to be a work of art. 
The press was invited into their hotel bedrooms, gathered around the marital bed, to discuss a new philosophy in which, as in White Chess Set, love and togetherness replaced conflict and competition. In the same year the couple caused billboards to be erected in many languages in many cities, as a kind of Christmas greeting from John and Yoko. The message was WAR IS OVER! (in large letters), with, just beneath (in smaller letters), IF YOU WANT IT. There was no definite article: The sign was not declaring the end of the Vietnam War as such but the end of war as a human condition. All you have to do, as their anthem proclaimed, was GIVE PEACE A CHANCE. Get in bed; make love, not war.
There is a somewhat darker side to Ono's work than I have so far implied. In a curious way, her masterpiece is Cut Piece, a performance enacted by her on several occasions, including at Carnegie Recital Hall in 1965. Ono sits impassively on the stage, like a beautiful resigned martyr, while the audience is invited to come and cut away a piece of her clothing. One by one, they mount the stage, as we see in a video at the Japan Society, and cut off part of what she is wearing. One of the cutters is a man, who cuts the shoulder straps on her undergarment. The artist raises her hands to protect her breasts, but does nothing to stop the action. Ideally the cutting continues until she is stripped bare. I find it a very violent piece, reminding me somehow of Stanley Milgram's experiment in psychology, in which people are encouraged to administer what they believe are electrical shocks to the subject (who pretends to be in agony). The audience has certainly overcome, a bit too gleefully, the gap between art and life--it is after all a flesh-and-blood woman they are stripping piecemeal with shears. It reveals something scary about us that we are prepared to participate in a work like that.
Another film, Fly, shows a housefly exploring the naked body of a young woman who lies immobile as the fly moves in and out of the crevices of her body, or moves its forelegs, surmounting one of her nipples. The soundtrack is uncanny, and we do not know if it is the voice of the fly, the suppressed voice of the woman or the weeping voice of an outside witness to what feels like--what is--a sexual violation. It is like the voiced agony of a woman with her tongue cut out. The sounds are like no others I have heard. Yoko Ono is a highly trained musician who gave her first concert at 4 and who sang opera and lieder when she was young. But she is also a disciple of Cage and an avant-garde singer who uses verbal sobs, damped screams, deflected pleas, to convey the feeling of bodily invasion.
Yoko Ono is really one of the most original artists of the last half-century. Her fame made her almost impossible to see. When she made the art for which her husband admired and loved her, it required a very developed avant-garde sensibility to see it as anything but ephemeral. The exhibition at the Japan Society makes it possible for those with patience and imagination to constitute her achievement in their minds, where it really belongs. It is an art as rewarding as it is demanding.
Travel writing is a dismal art. From Herodotus, wide-eyed (and perhaps more than a little disoriented) in an India of man-eating ants and black sperm; to Ibn Battuta, the fourteenth-century Arab wanderer who endured the thirst and marauding tribesmen of the Sahara; to Graham Greene in lawless Mexico and Redmond O'Hanlon on the untamable Amazon: The classics of the genre are journeys into the night, tales of loneliness and hardship and danger. As Ian Jack puts it, no traveler has written a better--or more exemplary--sentence than Captain Scott, who stood at the South Pole in January 1912 and wrote in his diary, "Dear God, this is an awful place."
Certainly, one would be hard pressed to find many finer sentences in Eastward to Tartary, Robert Kaplan's latest installment of gloom and hopelessness, an account of his travels in the Balkans, the Middle East and the Caucasus. Kaplan likes to quote Shakespeare and Gogol, and he has elsewhere extolled the usefulness of Conrad's writing in political analysis, but his own prose chokes on stilted aphorisms and anodyne observations. "Relative change, more than absolute change, is what history is often about," he concludes at a Romanian border post. Traveling by train between Bulgaria and Turkey, Kaplan comes to the realization that "the idea that the Internet and other new technologies annihilate distance is a half-truth." "You see, Robert," one of his informants tells him, "Hungarian nationalism, Romanian nationalism--they're all bad."
Perhaps the best that can be said about Kaplan's writing is that what it lacks in elegance, it makes up for in earnestness: As V.S. Naipaul--another traveler with a dyspeptic view of the world--has written of Conrad, his vision is flawed and unremittingly "dismal, but deeply felt." As in his earlier books--cataclysmic travelogues with titles like The Coming Anarchy, The Ends of the Earth and An Empire Wilderness--Kaplan shrouds the world in darkness, lamenting the "imprisoning desolation" and "Brezhnevian gloom" of the lands he visits. In the former Yugoslavia, in Africa, even in his own United States, whose decline he predicted in An Empire Wilderness, Kaplan has never met a society that wasn't falling apart. This dogged credo has earned him much notoriety and a considerable degree of influence: A correspondent for The Atlantic Monthly, Kaplan has seen his essays circulated in the White House and National Security Council, and his portrait of intractable "ancient hatreds" in Bosnia famously led President Clinton to conclude that intervening in the Yugoslav war would result in a quagmire (a dubious achievement that Kaplan has himself disowned). Over the course of two decades, Kaplan has established himself as the leading chronicler of the post-Communist Pax Americana, a grim reaper whose seamy version of globalization contrasts sharply with so many of the sunny--and often flippant--promises of global culture and prosperity.
Like those of many doom-mongering travelers--and like Conrad, memorably called a "bloody racist" by Chinua Achebe--Kaplan's jeremiads against the rot of the non-Western world have drawn charges of ill-informed prejudice. The Somali writer Nuruddin Farah has even suggested that Kaplan's forlorn vision of Africa was the result of a mefloquine-induced hallucination. But while it is true that Kaplan sometimes slips into mortifying disquisitions on "Asiatic" despotism and "the exotic confusion of the Orient," he deserves to be taken more seriously. In retrospect, what's striking about his books is not so much their bleakness as their prescience. Balkan Ghosts, written in 1989 and rejected by fourteen publishers before it was finally published at the start of the Yugoslav war, was an unheeded warning of the disintegration to come. In 1997, as the West was only beginning to awaken from its "end of history" delirium, Kaplan published a provocative essay in which he asked if "democracy was just a moment." (The essay coincided with an influential article by Fareed Zakaria, then the editor of Foreign Affairs, in which he similarly lamented the rise of "illiberal democracy.") The Coming Anarchy, whose eponymous essay has earned Kaplan the greatest opprobrium, was less pessimistic than downright hysterical. But it, too, evinced a remarkable ability to pierce the self-serving delusions of an African revival being bandied about by Western policy-makers. Today, as Central Africa burns amid what Madeleine Albright has called "Africa's first world war," Kaplan's portrait of civil war and disease and institutional meltdown is sadly accurate.
Eastward to Tartary returns to many of Kaplan's pet themes--indeed, one of the troublesome aspects of the book is that it sometimes seems like a not-altogether-comfortable imposition of old ideas on new geography. Traveling through some fourteen countries, Kaplan finds the familiar "erosion of [the] nation-state," pull of "blood loyalties" and evidence that "democracy was leading to separation, not reconciliation." It's hard to discern an overarching argument within the book's peripatetic structure, but the general thrust of Eastward to Tartary appears to be a return to the inferno. "Anarchy in some form or other, as I had seen, was almost everywhere," Kaplan writes near the end of the book, foreseeing "revolutionary upheaval" and "disintegrated" nations in a region where institutions are weak and ethnic strife is filling the vacuum left by the Soviet Union's collapse. The book is also a restatement of Kaplan's philosophy of political realism, a cynical faith--with intellectual roots in the writing of Thucydides and Machiavelli, both of whom Kaplan cites repeatedly--that politics is the exercise of self-interest. "What had I learned?" Kaplan asks as he ponders 4,000 miles of travel. The answer: "That power and self-interest would shape the immediate future, at least in this part of the world."
These aren't cheerful thoughts, but I fear that Kaplan's Hobbesian vision will once again prove prophetic--although, as in those earlier books, only partly so. The problem with Kaplan's bleakness is that it tends to overreach, as though driven as much by a craving for attention as by the urge to report faithfully. Kaplan's gloom is narcissistic; in love with itself, it can't get enough of its own darkness, always grasping beyond the limits of reasonable skepticism toward apocalypticism. The result is a certain misalignment of vision: Taken with the momentum of his own morose logic, he misses the real story. This tendency was notoriously pronounced in The Coming Anarchy, in which Kaplan not only foresaw (quite reasonably) a "bifurcated world"--one populated by comfortable citizens of the West, the other by the deprived denizens of the Third World--but went on to argue that, gradually, the boundaries between the two worlds would blur. "West Africa's future," he wrote, "will also be that of most of the rest of the world." What's more, Kaplan suggested, casting the shadow of his pessimism even wider, Africa (and the rest of the Third World, where Kaplan saw similar anarchy) would be responsible for the West's collapse. Like insidious viruses, the shantytowns, civil wars and tribal hatreds would slip through the borders of disintegrating nation-states, infecting the West with a "terrifying array of problems that will define a new threat to our security."
The irony, of course, is that the tragedy of the earth's wretched is in many respects precisely the opposite: that they will never escape the centrifugal pull of their collapsing societies, that today's electrified fences and immigration counters keep misery in its place more effectively than the mountains and deserts and icescapes that separated nations in an earlier age of travel. Kaplan's ambition is large: He claims allegiance to "the lay of the land" and the stories of "individuals," but, setting himself alongside Herodotus and Gibbon (to whose contemporary relevance The Coming Anarchy includes a paean), his real master is the grand sweep of History.
Under such tutelage, Kaplan becomes an incorrigible didact, turning every anecdote into an occasion for explication and instruction. The "extortionist cost" ($45) of a Turkish visa, he writes, was "part of a larger political story...that had not quite made it through the world media filter." While getting his shoes polished on the street in Turkey, he reflects that "commonplace but elaborate traditions such as baking bread and shoe maintenance...[allow] Turks to enjoy the benefits of global materialism without losing their identity." Besides making Kaplan sound like the quintessential American tourist, this determination to squeeze meaning out of every incident strains credibility. Just outside a decrepit train station in Bulgaria, Kaplan sees "a city of homeless youth and impoverished gypsies." This innocuous scene of poverty surrounding a railway station--ubiquitous in transport centers throughout the Third World, or indeed, visible in New York's Port Authority terminal--is proof for Kaplan that "tyranny creates a social vacuum," evidence that "social anarchy was never far from the surface here." Such moments verge on the incomprehensible: Kaplan's barren moonscapes are so devoid of redemption, so overflowing with suffering, that they appear as from a different reality.
To be fair, Eastward to Tartary's prognosis is, overall, more reasonable than The Coming Anarchy's, but it displays the same propensity for exaggeration. Kaplan is probably right that the countries he visits--the Caucasus in particular--are a caldron of ethnic hatred and political instability. He may be right, too, that a democratic free-for-all could exacerbate that instability (which is not an argument against democracy but against demagogic practitioners of democracy). But Kaplan is not satisfied with these insights. Donning his soothsayer's mantle, he prophesies Yugoslavia Round Two. "In the Caucasus, tribe and clan--not formal institutions--have always been the key to politics," he argues, apparently resurrecting the "ancient hatreds" that got him into such trouble in Balkan Ghosts. And, in fact, close on the heels of his recycled tribalism come predictions of Bosnia-style implosion. Eastward to Tartary, billed as a sequel to Balkan Ghosts, is overflowing with analogies and references to another war, in another part of the world. Indeed, part of Kaplan's purpose in writing this book is to "introduce Tartary (known today as Central Asia) as a place that has more in common with the Western Balkan countries than with the Oriental images conjured up by its exotic name." Thus, in Jordan, Prince Hassan shares with Kaplan his fears of a "balkanized Middle East with ethnic-sectarian conflict"; in Georgia, Kaplan hypothesizes that "the West would have to prove as muscular...as in the Balkans if it chose to keep these states alive"; and the Caucasus in general, for which Kaplan reserves his most ominous warnings, is destined to slide into chaos, abandoned and ignored by the West, "the Balkans of the future."
This is too dire, and sits uncomfortably with some of Kaplan's own observations. Although he pays less attention to it than might be expected, Kaplan discusses what the Pakistani journalist Ahmed Rashid, referring to the nineteenth-century quest for empire in Central Asia known as "the great game," has called "the new great game" of pipeline politics. In Azerbaijan and Turkmenistan, Kaplan discovers the scramble for oil wealth that has transformed what would no doubt be another neglected corner of the world into a game of "geostrategic poker." At times, discussing this game of stakes, Kaplan seems to backtrack on his grim predictions that the West will abandon Central Asia to its fate. But the clouds never lift completely, and he is soon back on song, remarking skeptically that "remaking this part of the world...would take both the resolve of a missionary and a sheer appetite for power that the West could probably never muster." This is a curious--not to mention naïve--position for someone so wedded to the belief that states act in their self-interest. Surely the lesson of the Gulf War is that the West is quite prepared to go to battle over oil? Bosnia is a poor prism for this part of the world. Despite all the pieties about the West's fundamental interests--motivated in no small part by memories of Sarajevo's ignition of World War I and a questionable faith in the cyclical nature of history--the greatest threat posed by the implosion of Yugoslavia to its powerful neighbors was never more than a wave of refugees (and a certain aesthetic discomfort). 
There was no oil to defend in Bosnia; as David Rieff and Michael Ignatieff, among others, have consistently argued, the case for intervention was always based on an idealistic commitment to the alleviation of misery, and that commitment--as evidenced by Boutros Boutros-Ghali's infamous assertion to the citizens of Sarajevo that he could list at least ten places in the world worse off than they were--never ranked very high in Western priorities.
So why, despite his own apparent misgivings, is Kaplan so stubbornly attached to his trope of the Balkans? Ego may have something to do with it, as may, ironically, a certain sentimentalism. Tucked between the lines of his hard-nosed realism, Kaplan often displays a certain missionary zeal to save the miserable societies he visits. "Travel writing is only a vehicle to do something else," he has said. But a vehicle for what? Read enough of Kaplan and you start thinking that he protests too much--that all the sound and fury could well be partly an attempt to frighten the West into action. Perhaps this explains the poignant sense of loss, the almost elegiac quality, that sometimes infuses his descriptions of political and social breakdown. In Turkmenistan, in a vacant lot "filled with rusty metal and the omnipresent smell of oil," Kaplan meets his friend Anna, part Armenian, part Azeri Turk. Anna admires a rose; Kaplan reflects that, in this dismal landscape, "you must learn to extract pleasure from small things." Anna tells him--"in anguish"--about the loss of her cosmopolitan world and the rise of ethnic politics that has followed the Soviet Union's collapse. Kaplan sums it up in a line: "An empire had collapsed," he writes, "and all that remained were blood loyalties."
It's an evocative and perhaps even profound sentence--but it also suggests what seems to be the real reason for Kaplan's attachment to Bosnia. His repeated invocations of collapsing empires and orphaned states are indications of his lingering fascination with the death of what the Polish writer Ryszard Kapuscinski has called the Soviet "imperium." Kaplan is a man of his times: He sees the world as a chessboard of competing empires, and his sepulchral vision is filtered through the lens of that era's paradigmatic political disasters. "What Vietnam was to the 1960s and 1970s, what Lebanon and Afghanistan were to the 1980s, and what the Balkans were to the 1990s, the Caspian region might be to the first decade of the new century," he writes. This is formulaic and, eleven years after the fall of the Berlin Wall, anachronistic. The world has moved on, new political forces are emerging--forces that Kaplan, ever with one eye on History, overlooks.
There is an important piece of the puzzle missing in Kaplan's descriptions of Islamic revivalism and Central Asian ferment. Although it does contain a few cameo appearances, Eastward to Tartary has surprisingly little to say about the clerical warriors of the Taliban, whose revolution in Afghanistan is sending tremors across South and Central Asia and reshaping the area Kaplan explores. As Ahmed Rashid argues in Taliban, his excellent insider's account of the continental upheaval, Afghanistan has "held Central Asia in a tight embrace for centuries," and now the rise of Taliban-sponsored fundamentalism is "sending shock waves" throughout the region. Kaplan encounters the pan-Islamic sentiments sweeping the region, but he hardly mentions the movement from which those sentiments are drawing inspiration, and in many cases material support. He describes some of the strange bedfellows emerging in Central Asia, but he only scratches the surface of the geopolitical transformations. From Russia (struggling with Islamic rebellion in Chechnya) to India (bearing the brunt of Taliban-trained militancy in Kashmir) to Shiite Iran (determined to limit the influence of the Taliban's Sunni revolution) to the United States (which has already sent its missiles in search of Afghanistan-sheltered Osama bin Laden), the world's powers are suddenly finding that they have a stake in Central Asia. Russia, which recently bullied its former Central Asian colonies into a security alliance to combat Islamic terrorism, just signed a similar agreement with India. The United States, too, has signed a counterterrorism memorandum with India, and--as Rashid recently told me--it has been conducting talks with Iran on a common strategy to handle the Taliban. 
China (with a nervous eye on its separatist Muslim province of Xinjiang), Pakistan and the Arab monarchies (confronted with a genie they unleashed but can no longer control), and even Indonesia and Malaysia (where the economic crisis has led to a resurgence of Islamic sentiment) are being drawn into a complex and treacherous struggle for influence in South and Central Asia.
None of this suggests that the region will be spared the mayhem envisioned by Kaplan--in fact, the mosaic of outside interests may only make matters worse. But it does suggest that far from being an orphaned corner of the post-Communist world--another Bosnia--Central Asia is emerging as the fault line in a new ideological conflict. Kaplan's view of the impending chaos is resolutely local: In his version of the post-Soviet vacuum, there is no room for such transnational alliances and interests, only for primeval ethnic and tribal ties, countries tearing themselves apart from within. Indeed, in a recent article on Pakistan, Kaplan discounts the role of the Taliban in South Asian instability, blaming instead the "bewildering complexity of ethnic and religious divisions" in Pakistan. In the process, he ignores the extent to which those divisions are being exacerbated by the fundamentalist influence of the Taliban.
Given his penchant for grand narratives, it's a little strange that Kaplan misses the larger picture, the broad canvas upon which the events he describes are unfolding. But that's the danger of serving history too faithfully. In Georgia, a man named Alexander Rondeli warns Kaplan about this. "All of us," he says, speaking of the stubbornness of ethnic animosity, "have this heavy weight from the past attached to our legs. We can only move forward while looking back." Kaplan is like Rondeli: Standing at ground zero of an emerging geopolitical order, he remains haunted by his Balkan ghosts, predicting the future while staring at the past.
December 8, 2000: It was twenty years ago today that Mark David Chapman shot and killed John Lennon outside the Dakota on West 72nd Street in New York City, bringing whatever was left of the sixties to a definitive and miserable end. Yet Lennon lives on--not just for his now-graying fans, not just for younger kids discovering the Beatles, but in some unexpected ways.
Case in point: At the Republican National Convention in Philadelphia this past August, as Dick Cheney stepped up to the podium to accept the party's nomination as vice presidential candidate, the band struck up a spirited version of Lennon's song "Come Together." This is the one on the Abbey Road album that begins "Here come ol' flattop" (Cheney of course is mostly bald), and continues, "One thing I can tell you is you got to be free"--a sixties sentiment that meant something quite different from tax cuts for the rich.
Cheney probably didn't know that Lennon started writing "Come Together" as a campaign song--for Timothy Leary's planned 1970 campaign for California governor against Ronald Reagan. Leary never used the song, but Lennon sang it live onstage at Madison Square Garden in 1972 in the midst of another presidential campaign, when Nixon was trying to have him deported to silence a prominent voice of the antiwar movement. Lennon changed the title line to "Come together--stop the war--right now!" and the audience cheered wildly.
The Democrats also played a Lennon song at their convention: They used "Imagine" as the theme of a tribute to Jimmy Carter. While the giant video showed Jimmy and Rosalynn hammering nails and fondling small children, the easy-listening version of Lennon's song omitted the words "Imagine there's no heaven/it's easy if you try/No hell below us/Above us only sky"--not really appropriate for America's first born-again Baptist President.
"Imagine" is a utopian anthem, and the utopian imagination was always a keystone of sixties New Left thought, distinguishing it from the bread-and-butter politics of traditional working-class socialism. "Power to the imagination" was a key slogan written on the walls in May '68. Today the country is full of billboards urging people to "Dial 1-800-imagine." I tried it. You don't get John Lennon singing "Imagine no possessions." Instead you get AT&T Wireless Services: Press 1 to upgrade your wireless plan, press 2 to inquire about new service, press 3 to inquire about an order and, of course, press 4 to hear these options again.
A search of the Nexis database found these variants on Lennon's "Imagine no possessions": a Republican who said "Imagine no estate tax," a television critic who wrote "Imagine no more Regis," a technophobe who wrote "Imagine no computers" and a Democratic pundit who headlined an opinion piece, "Imagine There's No Nader."
Lennon lyrics appear in print in some other unlikely places. When Time put Bill Clinton on its cover at the beginning of his first term, the cover line was "You Say You Want a Revolution." Two years later, when the Republicans won control of the House, the New York Times ran an opinion piece by R.W. Apple Jr. headlined "You Say You Want a Devolution." And just a few months ago, after Joe Lieberman changed his mind about privatizing Social Security, The New Republic headline read "You say you want an Evolution."
The headline writers probably had forgotten that Lennon wrote "Revolution" in response to the May '68 uprisings in Paris, criticizing student radicals for advocating violence. He recorded two versions of the song. The single--the "fast" version--came first. It was released in the United States in August, shortly after the police riot at the Democratic National Convention in Chicago. After the opening line--"You say you want a revolution"--it concluded, "count me out." The radical press was outraged. Ramparts called the song "a betrayal"; New Left Review called it "a lamentable petty bourgeois cry of fear." Time, on the other hand, reported that the Beatles had criticized "radical activists the world over," which Time found "exhilarating." The second, "slow" version of the song was released on the White Album two months later. Now, after the line "count me out," Lennon added another word: "in." He later explained, "I put both in because I wasn't sure." A year later he was singing "Power to the People."
Lennon's "Give Peace a Chance" was sung by half a million antiwar demonstrators at the Washington Monument in 1969, but since then it's come in for some revisionism. I remember militant friends back in those days singing "Give the dictatorship of the proletariat a chance." Then there's "Give War a Chance," which pops up every once in a while--the establishment journal Foreign Affairs used it as the title of a 1999 article by Edward Luttwak arguing against US intervention in local conflicts. Frontline broadcast a story on the Balkans in 1999 with the same title, and P.J. O'Rourke used Give War a Chance as the title for a book that became a bestseller. On the other hand, none other than Trent Lott uttered the words "give peace a chance" on the floor of the Senate--talking about Kosovo. Finally, a company called Peace Software (www.peace.com) is using the slogan "Give Peace a Chance."
Lennon's most intense and personal post-Beatle song, "God," a very slow track on his first solo album, contains a litany that concludes, "I don't believe in Beatles." The New York Times ran a full-page interview in September with Philip Leider, the founding editor of ArtForum, that included his own personal version of the lyrics, which took up twenty-three lines of our newspaper of record. Warhol came first: "I don't beleeve in Andy." Then: "I don't beleeve in Haring"; "I don't beleeve in Fischl"; "I don't beleeve in Koons"; and so on through nineteen more current art stars.
Several of Lennon's most memorable lines have not been appropriated by pundits or Op-Ed types: "Instant Karma's gonna get you" remains untouched, at least according to Nexis, and thus far nobody has found a way to use "I am the walrus, goo-goo g'joob." But aside from these notable exceptions, the conclusion is clear: John Lennon may be gone, but twenty years after his death his words and ideas are here, there and everywhere.
A quarter-million people thronged Abraham Lincoln's Memorial that day. In the sweltering August humidity, executive secretary Roy Wilkins gravely announced that Dr. William Edward Burghardt Du Bois--NAACP founding father and "senior intellectual militant of his people"--had died in exile the day before.
It's easy to forget. What we now think of, monolithically, as the civil rights movement was at the time a splintering half-dozen special-interest groups in ill-coordinated pitched camps. Thurgood Marshall, never known for tact or political correctitude, called the Nation of Islam "a buncha thugs organized from prisons and financed, I'm sure, by some Arab Group." The NOI viewed the Urban League as a black front for a white agenda. A fringe figure gaining notoriety for his recent Playboy interview with an obscure journalist named Alex Haley, Malcolm X irreverently dismissed both "the farce on Washington" and the young minister just moments away from oratorical immortality, the Rev. Dr. Martin Luther King Jr., as "Bishop Chickenwings."
If the legacy of Du Bois's long life was unclear then, what can it all mean now? What possessed him to renounce the widely coveted citizenship for which those gathered there that day--inspired in part by his example--were marching? What can a scholarly biography of the patron saint of African-American intellectuals--written by a tenured professor for a prestigious publishing house, impatiently awaited by specialists and educated generalists alike--what can all this mean to 101 million eligible nonvoters "entirely ignorant of my work and quite indifferent to it," as Du Bois said in his time, much less to 30 million African-Americans beyond the Talented Tenth and those few old-timers in Harlem who remember Du Bois as being, mostly, a remarkably crotchety old man?
With these mixed feelings of pleasure, gratitude and frustration, and a sense of momentous occasion, I read the monumentally ambitious sequel to David Levering Lewis's Pulitzer Prize-winning Biography of a Race, 1868-1919--a book seven years in the making and itself a National Book Award finalist.
"I remember well," Du Bois wrote, famously, "when the shadow swept across me." He was born "a tangle of New England lineages"--Dutch, Bantu, French Huguenot--within living memory of the Fourteenth Amendment and The Communist Manifesto, one generation removed from slavery. And though he laid claim to both his African and European heritage, still it was a peculiar sensation. "One ever feels his two-ness--an American, a Negro; two souls, two thoughts, two unreconciled strivings; two warring ideals in one dark body, whose dogged strength alone keeps it from being torn asunder." Yet Du Bois knew full well that had he not felt, very early on, this double-consciousness, he might easily have become just another "unquestioning worshiper at the shrine of the established social order."
Willie D. charted his course as early as his teens, inaugurating his writing and public-speaking careers with articles in the Springfield Republican and a high school valedictory address on abolitionist Wendell Phillips. He arrived at the Harvard of Santayana and William James, who thought him easily among the most gifted of his students, already notorious for the "arrogant rectitude" others would resent all his life. He graduated cum laude, honing his prose with a rigorously liberal education in Latin, Greek, modern languages, literature, history and philosophy. But for a graduate student in sociology during the 1890s, Max Weber's Berlin, not Cambridge, was the place to be. And it was there, chain-smoking and speaking fluent German, celebrating both his 25th birthday and "his own genius," that W.E.B. Du Bois spelled out his life's ambition: "to make a name in science, to make a name in literature, to raise my race." Only because his scholarship ran out did Du Bois return to America for the consolation prize: Harvard's first African-American PhD.
Atlanta, after Europe and the North, came as a shock. Not that the recent lynching was in itself any great surprise. Du Bois simply wasn't prepared, passing by the local grocer, to see the souvenirs of severed fingers on display out front. Headquartered at Atlanta University, for the next twelve years he taught history and economics. By the time Frederick Douglass died in 1895, the Tuskegee model of black higher education was dominant, and Booker T. Washington its leading lobbyist. That same year Washington, whose power had been growing since 1885, delivered his famous Atlanta Exposition speech: "In all things purely social," he said, holding up both hands, digits spread wide, "we can be as separate as the [five] fingers"--he paused dramatically, clenching each hand into a fist--"yet as the hand in all things essential to mutual progress." Convinced that Washington's appeasement had paved the way for Plessy v. Ferguson in 1896, Du Bois and other black intellectuals felt sold down the river. Du Bois's scathing review of Washington's Up From Slavery (1901), declaring war on merely vocational training of a "contented and industrious peasantry," was collected in The Souls of Black Folk (1903). Du Bois and Washington came, notoriously, to ideological blows. It was the beginning of the end for Booker T. Washington.
Yet there was no personal animus between them. Shrewdly, Washington tried to hire Du Bois away to Tuskegee, even taking him along on one of his fundraising junkets. But once at Andrew Carnegie's office, Washington--who knew where his bread was buttered and that Du Bois could be counted on not to keep his mouth shut--left him waiting downstairs. "Have you," Washington asked, "read Mr. Carnegie's book?" W.E.B. allowed he had not. "You ought to," said Booker T. "Mr. Carnegie likes it."
Around 1909, certain Niagara Movement radicals and Jewish abolitionist holdovers formed a coalition that became the NAACP. Du Bois moved to New York, where, as editor of The Crisis for the next twenty-five years, his word was gospel.
Meanwhile, Marcus Garvey addressed a Harlem crowd of 2,000 in 1917, preaching black economic independence and resettlement. He even offered, to the resurgent Klan's delight, to transport black Americans back to Africa. Now, the masses might be fooled by the plumed and gold-braided pretensions and Napoleonic pageantry of
the Emperor Marcus Mosiah Garvey--self-proclaimed High Potentate and Provisional President-General of all Africa, Commander in Chief of the Black Star Line, an entire fleet of three dubiously seaworthy vessels--with his back-to-the-motherland schemes, his dukes and duchesses of Uganda and Niger, his knight commanders of the Distinguished Order of Ethiopia and the Nile. But Du Bois, who had just returned from Firestone's Liberia as diplomatic envoy, knew better. (Besides, everybody who was anybody knew that what Garvey's Universal Negro Improvement Association really stood for was "Ugliest Negroes in America.") As far as Du Bois was concerned, Garvey was either a lunatic or a traitor. Whereas, it seemed to Garvey--who saw Du Bois's NAACP as the National Association for the Advancement of Certain People--that the lunacy was for blacks to expect equality in America. In the end, his daring, energy and charisma were surpassed only by his ignorance of finance. Du Bois sounded the rallying cry: "Garvey Must Go." The FBI agreed. And if deportation on the grounds of being an undesirable alien wouldn't hold up in court, mail fraud would do nicely. Arrested in 1922, tried and convicted in 1923, Garvey took up residence at Atlanta Federal two years before Malcolm X was born.
Remember, back before they were Jim Crowed into academic ghettos, when history was literature and vice versa? When nonspecialists read Macaulay, Michelet? Poet, short-story writer, essayist and novelist as well as historian, Du Bois was by no means master of all the genres he assayed. But he electrified African-American literature as writer during the twentieth century's first decade. Then, as editor, he paved the way for younger writers during subsequent decades. Biography, however, is a late development in the tradition. What advances have eminent African-Americans like David Levering Lewis made in that "most delicate and humane of all the branches of the art of writing"? And do his tomes amount to a "masterpiece of the biographer's craft"?
With their cast of legendary characters, colorful set locations, gripping storylines and virtuoso draftsmanship, they certainly aspire to it. For analytical rigor, judicious gossip and subtle insight into the social, political and economic "roots and ramifications" of "racial, religious, and ethnic confrontation, and assimilation in America" between Reconstruction and the civil rights movement, Lewis is fully equal to the task of his formidable subject. And his lucid, downright old-fashioned good writing, so full of fine flourishes and phrases, is mostly innocent of academic jargon. So much so that for years--visiting the same archives, examining the same documents and cross-examining the same witnesses while working my way carefully through these volumes, underlining passages in mechanical pencil, leaving yellow flags on every other page--I kept trying to figure out my misgivings.
And then it hit me. The problem here is not one of length--Boswell's massive Life of Samuel Johnson still startles, 200 years later--but one of scale, of Turgenev's "right relation" among a dozen or so vivifying narrative elements beyond character and what used to be called "plot." All of these together in a constant juggle of transitions--abstract to concrete, poetic to prosaic, description to dialogue, sentence length and rhythm--can create compelling momentum. Any one of these, overrelied upon in a fact-filled narrative of 1,500 pages, can be lethal. "With the 20th century," said Virginia Woolf,
a change came over biography, as it came over fiction and poetry.... the author's relation to his subject is different. He is no longer the serious and sympathetic companion, toiling slavishly in the footsteps of his hero.... Moreover, he does not think himself constrained to follow every step of the way.... he sees his subject spread about him. He chooses; he synthesizes; in short, he has ceased to be the chronicler; he has become an artist.
Cautious of overstepping the bounds of the historically permissible, the distinguished professor has crafted a straightforward chronicle. Far too often, characters are molded not organically from suggestive situation but by accretion of meticulous archival detail--endless lists of academic pedigree heaped, all at once, in static inventories of naturalistic description--then left to atrophy in the reader's mind. A compelling narrative begins where the dossier leaves off. And a good biographer is a historian, but a good historian isn't necessarily a biographer. The progression from one to the other is no more formally inevitable than that from short-story writer to novelist. But don't get me wrong. The aesthetic quibble is really by way of illustrating how close this life might have come to greatness, to the artistry of all that Lytton Strachey left out in tending toward that "becoming brevity...which excludes everything that is redundant and nothing that is significant," and which, "surely, is the first duty of the biographer."
Du Bois's influence on African-American literature, as both writer and editor, is hard to exaggerate. Between Phillis Wheatley, the publication of Souls, the silence of Charles Chesnutt and the death of Paul Laurence Dunbar from drunken disillusionment in 1906, dozens of poets, authors and pamphleteers emerged, boycotting the happy-blacky-nappy, banjo-strumming, watermelon-eating, darky dialect of previous eras. Of this work, says James Weldon Johnson in the classic history Black Manhattan, "Some was good, most was mediocre, much was bad, and practically all of it unknown to the general public." As late as 1914, with the exception of Johnson's Autobiography of an Ex-Colored Man, there wasn't much in the way of African-American literature, and Du Bois thought things looked bleak. By 1920, New York was America's greatest city, and Harlem--a two-square-mile city within the city where a quarter-million African-Americans boasted more poets, journalists, musicians, composers, actors, dramatists and nightclubs than any other spot on earth--became the world-famous capital of black America. It seemed to Du Bois that a renaissance of American Negro literature was now due.
To put the arts on equal footing with social policy, Jessie Fauset--his lover and literary editor--urged an editorial shift in the pages of The Crisis. In short order, she published Langston Hughes's "The Negro Speaks of Rivers" in 1921 and prose poetry by Jean Toomer, later collected in Cane (1923). For the first time in history--just when Du Bois feared he'd have no worthy successors--a literature of African-Americans, by African-Americans and for African-Americans and anyone else who cared to listen was not only a possibility but a reality. The Harlem Renaissance was under way.
One prodigy Du Bois particularly delighted in was pinky-ringed young poet Countee Cullen. Companionable, uncombative, anxious for the kind of credibility a tidy résumé and Harvard degree could confer, Cullen idolized Du Bois to a degree perhaps predictable in a cautious orphan risen from impoverished obscurity to international fame by the age of 22 yet lacking, in the final analysis, the kind of intellectual and artistic daring that could sustain it. Du Bois, for his part, perhaps projected onto Cullen some of the paternal pride and ambition long buried with the infant son he'd loved and lost. And so he married off his only daughter. Langston Hughes rented a tuxedo, an organist played Tannhäuser and sixteen bridesmaids wore white. The only problem--aside from the fact that Countee Cullen was gay--was that the girl admired but didn't love him. It was a match made in Hell, a dramatic example of how "spectacularly wrongheaded" Du Bois could be.
For a decade or more, the Harlem Renaissance promised 10 million African-Americans "taken for granted by one political party and despised by the other, poor and overwhelmingly rural, frightened and disunited," the illusion of an era of freedom, justice and equality undreamed of since Reconstruction. To his immense credit, Du Bois was not lulled into submission, mistrusting the impulse toward "salon exotica" and a smattering of prizes for prodigies. Then as now, the means of production--the Hollywood studios, the recording studios, the theaters--were for the most part white-owned. As early as 1926, he warned about "the politics of patronage," challenging that African-Americans would get the art that they deserved--or were willing to pay for: "If a colored man wants to publish a book, he has to get a white publisher and a white newspaper to say it's great; and then [black people] say so." (Ain't a damn thang changed.) By 1934 it had become embarrassingly clear that civil rights would not follow logically from "forceful prose" and a demonstration of artistic excellence on the part of a few Ivy League Negroes. The movement was dead, "scuttled," as chief publicist Alain Locke put it, as much from within as from without, by faddish market swings and stock speculations of the Zora Neale Hurston Niggerati, on the one hand, and the liberal Negrotarians on the other.
For Du Bois, as for most African-Americans, the Depression hit harder and faster and lasted longer than for the country at large. The royal wedding had wiped out his savings, and his Crisis salary hadn't been paid for months. He was broke.
Du Bois became increasingly radicalized during the 1930s and '40s. As he saw it, the NAACP, by focusing almost exclusively on legal strategy, was beginning to work "for the black masses but not with them." In 1934, out of sync with the mainstream leadership, he left in disgust. He returned to Atlanta University, reading Das Kapital and writing Black Reconstruction in America (1935). Du Bois, who first visited the Soviet Union in 1926, returned in 1936. Home from History's frontlines a self-professed "Bolshevik," even though, as a Socialist, he combined "cultural nationalism, Scandinavian cooperativism, Booker Washington and Marx in about equal parts," Du Bois remained unconvinced that the Communist Party, which never attracted more than a few hundred black members, was their last best hope. In any case, African-Americans did not "propose to be the shock troops of the Communist Revolution."
During the McCarthy era, the black leadership, bending in the prevailing ideological winds, began to distance itself from the left. Back in New York, involved in nuclear disarmament activity declared subversive by the US government, Du Bois was arrested and tried as an unregistered agent of a foreign power. He was acquitted in 1951, but the State Department confiscated his passport, prohibiting travel abroad. It was the last straw.
The prophet was without honor only in his own country. So when the government embargo was lifted in 1958, Du Bois went on lecture tours of Eastern Europe and the Soviet Union, becoming a kind of poster boy in the Communist effort to discredit the States. He was awarded the Lenin Peace Prize in 1959, and in Red China, his birthday was declared a national holiday by Chou En-lai. Did the party use Du Bois? Or did Du Bois use the party to further his own agenda? Both, most likely.
In 1960, seventeen African states gained independence; Kwame Nkrumah's Ghana had led the way in 1957. At Nkrumah's invitation, Du Bois exiled himself, renouncing his American citizenship. He officially joined the Communist Party in 1961. Shrunken now and a bit stooped, his memory not quite as sharp as it once was, the scholar-citizen spent his last days in a spacious house with a view of flowering shrubs in Accra's best neighborhood, an honored guest of state, surrounded by busts of Lenin and Chairman Mao and an impressive library of Marxist thought, editing the Negro encyclopedia and receiving visitors the world over. At last, on August 27, 1963, the visionary whose long life--spanning Reconstruction, Plessy v. Ferguson, two World Wars, Brown v. Board of Education and now the civil rights movement--had been the literal embodiment of the nineteenth century's collision with the twentieth, died in Accra, where he was accorded an elaborate state funeral.
The bioepic ends, as it began 1,500 pages ago in Volume I, with the death of W.E.B. Du Bois. A living institution, he was "productive, multiple, controversial, and emblematic." His influence--as cultural ambassador, as writer and editor, as activist whose spectrum of social, political and economic thought seems refracted in phenomena as varied as Ho Chi Minh, the Negritude of poet-statesmen Aimé Césaire and Léopold Senghor as well as the Black Power movement that peaked after his death--is ubiquitous.
A difficult man as capable of coldness to old friends as he was reluctant to admit mistakes, a prickly Brahmin who walked with kings but failed to acquire the common touch, Dr. Du Bois emerges a kind of tragic hero as flawed as he was gifted. At times you wonder whether he wasn't his own most formidable enemy. But whatever his blind spots, he was only too well aware, looking backward, that battling racism real and imagined at every turn had twisted him into a far less "human" being than he might otherwise have been.
Fifteen years and two computer crashes in the research and writing, these volumes were a lifetime, literally, in the making. As a boy born in Little Rock two decades before the civil rights movement began, Lewis had a portentous encounter with the great man. Fisk man and author of books on South Africa and the Dreyfus Affair, he's now a professor of history at Rutgers. And just as Renaissance scholarship would be incomplete without When Harlem Was in Vogue, the twenty books and 100 articles of W.E.B. Du Bois's eighty-year publishing career, so handsomely anthologized in Nathan Irvin Huggins's Library of America Writings, are indispensably complemented by what is, if not a masterpiece of biography, then almost certainly the standard social, political and intellectual history of his life and times.
A half-century after the appearance of The Vital Center, Arthur Schlesinger Jr.'s spirited political polemic, we have more than sufficient cause to meditate on what might be called Dead Centrism.
To judge from magazine covers, the American divorce rate is either a disaster for children or no problem at all. First came the famous "Dan Quayle Was Right" article in The Atlantic in 1993, with a cover line that said divorce "dramatically weakens and undermines our society." Then, in 1998, Newsweek heralded The Nurture Assumption, whose author, Judith Rich Harris, argued that whether parents divorce makes little difference in children's lives because genetics and peer groups determine their problems. This September, Time featured The Unexpected Legacy of Divorce, a book whose authors, psychologists Judith Wallerstein and Julia Lewis and journalist Sandra Blakeslee, brought the gloomy news that a majority of children of divorce still suffer twenty-five years later. What's a reader to think?
The facts are not in dispute: The American divorce rate doubled in the 1960s and 1970s and has held steady or possibly declined a bit since then. At current rates, about half of all marriages will end in divorce. One million children experience a parental divorce every year. Most of them are upset in the immediate aftermath of the breakup. Some act out, others become withdrawn. Too often, fathers fail to provide adequate financial support and mothers and children see their standards of living drop. Without doubt, going through a divorce is a traumatic experience for parents and children alike.
But are most children harmed in the long term? On this question, recent media coverage has lurched between two extremes. At one end is the doomsday view that divorce sentences children to life at emotional hard labor. In this view, a parental divorce starts a chain of events that leaves most adult children anxious, unhappy and often unable to make a commitment to a partner. At the other end is the evolutionary psychologists' view that children's behavior is genetically programmed, so that whether parents divorce doesn't matter very much. According to this line of reasoning, divorce is just a flag that identifies genetically challenged families whose troubles would have occurred even if the parents had stayed together.
Said this starkly, neither extreme seems convincing. Yet it is a sad fact of public debates about social problems that the extremes tend to capture everyone's attention. Magazines are sold and talk shows are fueled by the announcement that a particular problem is devastating American society and then by the news that--wait a minute--it's really not a big problem after all. There's little patience for discussions of problems that are serious but not calamitous. And yet the gravity of many social problems lies in the demilitarized zone between the extremes.
For example, consider teenage childbearing. It was initially declared a scourge. A leading researcher wrote famously in 1968, "The girl who has an illegitimate child at the age of 16 suddenly has 90 percent of her life's script written for her." More recently, however, some researchers and commentators have argued that most teenage mothers would not be better off had they delayed having children. Teenage childbearing, it is alleged, merely reflects growing up in disadvantaged circumstances. Poor teen mothers would still be poor even if they hadn't had their babies. While there is some merit to this argument, research suggests that having a baby as a teenager does add to the difficulties girls from disadvantaged backgrounds face.
Research on divorce also suggests that extreme views are inaccurate. But you wouldn't know it to read the latest report by Wallerstein and her colleagues on her long-term study of children of divorce. In 1971, she selected sixty families that had been referred by their attorneys and others to her marriage and divorce clinic in Marin County, California, shortly after the parents separated. Wallerstein kept in touch with the 131 children from these families. Her book on the first five years, Surviving the Breakup: How Children and Parents Cope with Divorce, written with Joan Berlin Kelly, contained insightful portraits of the difficulties the children faced as their parents struggled with the separation and its aftermath. Her book about how they were doing at the ten- and fifteen-year mark, Second Chances: Men, Women, and Children a Decade After Divorce, written with Sandra Blakeslee, became a bestseller. It chronicled the continuing problems that most of the children were having.
For her new book, she was able to talk to ninety-three of the children at the twenty-five-year mark. Her striking conclusion is that most of these individuals, now 33 years old on average, have suffered greatly in adulthood. A minority have managed to construct successful personal lives, but only with great effort. The legacy of divorce, it turns out, doesn't fade away:
Contrary to what we have long thought, the major impact of divorce does not occur during childhood or adolescence. Rather, it rises in adulthood as serious romantic relationships move center stage. When it comes time to choose a life mate and build a new family, the effects of divorce crescendo.
Young adults from divorced families, Wallerstein writes, lack the image of an intact marriage. Because they haven't had the chance to watch parents in successful marriages, they don't know how to have one. When it comes time to choose a partner or a spouse, their anxiety rises; they fear repeating the mistakes of their parents. Lacking a good model, they tend to make bad choices. (In the realm of work, in contrast, Wallerstein's subjects had no particular problems.)
A woman who took the role of caregiver to a distraught parent or to younger siblings while growing up, for instance, may choose a man who needs lots of caring in order to function. But she soon finds his neediness and dependency intolerable, and the relationship ends. Wallerstein writes of one such woman in her study:
She described how she would come home after work and find her partner lying on the couch, waiting for her to take charge. It was just like taking care of her mom. At that point, she realized she had to get out.
Young men, Wallerstein tells us, were wary of commitment because they were afraid their marriages would end as badly as their parents' had. Many avoided casual dating and led solitary lives. She tells the story of Larry, who after courting and living with Grace for seven years still could not bring himself to marry her. Not until she packed up and left in frustration did he agree. He told Wallerstein:
I realized I loved her and that she was important to me but I was unable to make a decision. I was afraid because of the divorce. I was afraid of being left and I think that is why I was afraid of making a commitment to her.
Other children in the study turned to alcohol, drugs and, particularly among girls, early sexual activity. Wallerstein writes that sexual promiscuity was a result of girls' feelings of abandonment by their fathers. Their low self-esteem, their craving for love and their wish to be noticed led them to seek sexual liaisons and sometimes to start ill-conceived partnerships and marriages.
Overall, we are told, close to half the women and over one-third of the men were able to establish successful personal lives by the twenty-five-year mark--but only after considerable pain and suffering, much anxiety about repeating the mistakes of their parents, many failed relationships and, for one-third, psychotherapy. The rest were still floundering. Only 60 percent had ever married, compared with about 80 percent among all adults at their ages. Moreover, only one-third had children, as if they were afraid of doing to children what had been done to them.
Without doubt, a disturbing picture. And what makes it even more disturbing is Wallerstein's claim that her subjects are more or less representative of the typical American middle-class family that undergoes a divorce. Her families were carefully screened, she assures us, so that the children were doing "reasonably well" at school and had been developmentally "on target" before the divorce. Nor were the families especially troubled before the breakup, she says. "Naturally," Wallerstein writes, "I wanted to be sure that any problems we saw did not predate the divorce. Neither they nor their parents were ever my patients."
This claim to have a sample of typical, not unduly troubled families is, however, contradicted by the extensive psychological problems that the parents displayed when they were assessed at the initial interview. But you won't find that information in this book or the previous one. Only in the appendix to her first book, Surviving the Breakup, in 1980, does Wallerstein discuss the parents' mental states. There we learn the startling information that 50 percent of the fathers and close to half the mothers were "moderately disturbed or frequently incapacitated by disabling neuroses or addictions" when the study started:
Here were the chronically depressed, sometimes suicidal individuals, the men and women with severe neurotic difficulties or with handicaps in relating to another person, or those with long-standing problems in controlling their rage or sexual impulses.
And that's not all: An additional 15 percent of the fathers and 20 percent of the mothers were found to be "severely troubled during their marriages." These people "had histories of mental illness including paranoid thinking, bizarre behavior, manic-depressive illnesses, and generally fragile or unsuccessful attempts to cope with the demands of life, marriage, and family."
Typical American middle-class families? Hardly. These were by and large troubled families of the kind one might expect to come to a divorce clinic for therapy. Why this information was excluded from the nine-page appendix on the research sample in the new book--why an interested reader can only find it buried in the appendix of a book written twenty years ago--is puzzling. Does Wallerstein now consider this information to be in error? Irrelevant? Or just embarrassing?
The problem for Wallerstein is that troubled families often produce troubled children, whether or not the parents divorce. So it may be a considerable overstatement to blame the divorce and its aftermath for nearly all the problems she saw among her children over the twenty-five years. In a study of the records of several thousand British children who were followed from birth to age 33, Lindsay Chase-Lansdale, Christine McRae and I found that children whose parents would later divorce already showed more emotional problems at age 7 than children from families that would remain together. The gap widened as the divorces occurred and the children reached adulthood, suggesting that divorce did have a detrimental long-term effect on some of them. But a large share of the gap preceded the divorces and might have appeared even had the parents stayed together.
Sensitive to the particularities of her sample, Wallerstein recruited a "comparison sample" of adults from nondivorced families. The comparison sample, we are told, was selected to match the socioeconomic level of the families in the study. In many respects, the individuals in the comparison group were doing better than the study's children, which Wallerstein presents as evidence that divorce really is the cause of the difficulties in the latter group. But since the comparison sample presumably was not matched on the parents' chronic depression, suicidal tendencies, problems in controlling rage, bizarre behavior and manic-depressive illness, their inclusion does not prove Wallerstein's case.
What, then, can we take from Wallerstein's study? It is an insightful, long-term investigation of the lives of children from troubled divorced families. It gives us valuable information on what happens to children when things go wrong before and after a divorce. And things sometimes do go wrong: Many divorcing parents face the kinds of difficulties that Wallerstein saw in her families. Her basic point that divorce can have effects that last into adulthood, or even peak in adulthood, is valid. She was one of the first people to write about children who seemed fine in the short term but experienced emotional difficulties in adolescence or young adulthood--in her previous book she called this the "sleeper effect"--and now she is the first to describe it in detail among adults who have reached their 30s. Psychotherapists, social workers, teachers and other professionals who see troubled children of divorce and their parents will find her analyses instructive. Parents and children who are struggling with divorce-related problems will find her analyses helpful.
But no one should believe that the negative effects of divorce are as widespread as Wallerstein claims. Some portion of what she labels as the effects of divorce on children probably wasn't connected to the divorce. And the typical family that experiences divorce won't have as tough a time as Wallerstein's families did. Parents with better mental health than this heavily impaired sample can more easily avoid the worst of the anger, anxiety and depression that comes with divorce. They are better able to maintain the daily routines of their children's home and school lives. Their children can more easily avoid the extremes of anxiety and self-doubt that plague Wallerstein's children when they reach adulthood.
What divorce does to children is to raise the risk of serious long-term problems, such as severe anxiety or depression, having a child as a teenager or failing to graduate from high school. But the risk is still low enough that most children in divorced families don't have these problems. In the British study, we found that although divorce raised the risk of emotional problems in young adulthood by 31 percent, the vast majority of children from divorced families did not show evidence of serious emotional problems as young adults.
Except for Wallerstein, many of the writers most concerned about divorce now appear to recognize this distinction. Barbara Dafoe Whitehead, who wrote the "Dan Quayle Was Right" piece in The Atlantic (drawing heavily on Wallerstein's earlier work), acknowledged in a more recent, book-length treatment, The Divorce Culture, that a majority of children probably aren't seriously harmed in the long term. But she argued that even if only a minority of children are harmed, divorce is so common that a "minority" is still a lot. And she is correct. Divorce is not a problem that "dramatically weakens and undermines our society," but it nevertheless deserves our attention.
For that reason, some of the remedies Wallerstein suggests would be useful: creating more support groups in schools for children whose parents are divorcing, insuring that divorced fathers contribute to the cost of their children's college education and educating newly separated parents about how to shield their children from conflict. Measures such as these would help some children without imposing undue strain on parents, schools or the courts.
Less clearly useful is Wallerstein's recommendation that parents in unhappy, loveless, but low-conflict marriages consider staying together for the sake of their children. I think she is probably right that children can develop adequately in "good enough" marriages that limp along without an inner life of love and companionship. There were millions of these marriages during the baby-boom years of the 1950s, when wives weren't supposed to work and women were forced to choose between having a career and being a mother. The result was often frustration and depression. Few people (not even Wallerstein) want to constrain women's choices again. Certainly, unhappy parents have an obligation to try hard to change an unsuccessful marriage before scuttling it. Without doubt some parents resort to divorce too hastily. But no one as yet has a formula that can tell parents how much pain they must bear, how much conflict to endure, before ending a marriage becomes the better alternative for themselves and their children.
Least defensible is the attempt by Wallerstein to inform readers whose parents have divorced that their problems with intimacy stem from the breakup. In high self-help style, Wallerstein tells her readers:
You were a little child when your parents broke up, and it frightened you badly, more than you have ever acknowledged.... When one parent left, you felt like there was nothing you could ever rely on. And you said to yourself that you would never open yourself to the same kinds of risks. You would stay away from loving. Or you only get involved with people you don't care about so you won't get hurt. Either way, you don't love and you don't commit.
And so forth. Wallerstein plants the seed of the pernicious effect of exposure to divorce as a young child--and then waters it. Yes, the reader thinks, that must be why I'm so anxious about getting married. Never mind that making a commitment to marry someone is anxiety-producing for young adults from any background. Or that we live in an era when the average person waits four to five years longer to marry than was the case a half-century ago. Wallerstein encourages readers to believe that most of their commitment problems stem from their parents' divorces. But parental divorce isn't that powerful, and its effects aren't that pervasive. To be sure, it raises the chances that children will run into problems in adulthood, but most of them don't. Unfortunately, that's a cover line that doesn't sell many magazines.
Judith Butler, the Maxine Elliot Professor of Rhetoric and Comparative Literature at the University of California, Berkeley, is a troublemaker. She announced as much when she arrived on the critical feminist scene in her second and best-known work, Gender Trouble: Feminism and the Subversion of Identity, first published in 1990:
Contemporary feminist debates over the meanings of gender lead time and again to a certain sense of trouble, as if the indeterminacy of gender might eventually culminate in the failure of feminism. Perhaps trouble need not carry such a negative valence. To make trouble was, within the reigning discourse of my childhood, something one should never do precisely because that would get one in trouble. The rebellion and its reprimand seemed to be caught up in the same terms, a phenomenon that gave rise to my first critical insight into the subtle ruse of power: The prevailing law threatened one with trouble, even put one in trouble, all to keep one out of trouble. Hence, I concluded that trouble is inevitable and the task, how best to make it, what best way to be in it.
In the 149 dense pages that follow this preface, Butler took on a host of psychoanalytic theorists, from "Freud and the Melancholia of Gender" to "Lacan, Riviere, and the Strategies of Masquerade." She also critiqued "The Body Politics of Julia Kristeva" (who uses semiotics in the service of psychoanalytic critique) and "Monique Wittig: Bodily Disintegration and Fictive Sex," whose The Lesbian Body and other works are, according to Butler, limited by Wittig's humanism. In Gender Trouble, Butler's admiration is reserved for Michel Foucault, the openly gay philosopher of power most famous for his History of Sexuality and Discipline and Punish, a philosopher whose terms are evident in Butler's preface above: "the reigning discourse of my childhood"; "rebellion and its reprimand seemed to be caught up in the same terms"; "the subtle ruse of power." Butler's genealogical critique of gender, i.e., a critique of gender's very origins, a critique of the very terms of the critique, was a grand synthesis of the most radical European ideas about sexuality and sexual identity. Simone de Beauvoir's famous statement in The Second Sex that one is not born but rather becomes a woman is a conceptual starting point, but only a starting point. Foucault's work on the journals of Herculine Barbin, a nineteenth-century hermaphrodite so tortured by his/her predicament in a sexually normative world that s/he commits suicide, enables Butler's challenge not only to the categories of gender but to the categories of sex itself. But what stands head and shoulders above Butler's illustrious collection of radical theories is Gender Trouble's overarching claim that gender, and possibly even sex itself, is not an expression of who one is but rather a performance.
Toward the end of Gender Trouble, Butler poses a set of questions that indicate the practical, political direction of her critique:
What performance where will invert the inner/outer distinction and compel a radical rethinking of the psychological presuppositions of gender identity and sexuality? What performance where will compel a reconsideration of the place and stability of the masculine and the feminine? And what kind of gender performance will enact and reveal the performativity of gender itself in a way that destabilizes the naturalized categories of identity and desire?
Not only did Gender Trouble immediately appear on feminist-theory syllabuses around the country, it became a foundational text of queer theory. Is it any wonder it provoked a backlash?
Antigone's Claim: Kinship Between Life and Death is a slender, very well-written book that is the published version of the Wellek Library Lectures Butler gave at the University of California, Irvine, in May 1998. Butler starts out:
I began to think about Antigone a few years ago as I wondered what happened to those feminist efforts to confront and defy the state. It seemed to me that Antigone might work as a counterfigure to the trend championed by recent feminists to seek the backing and authority of the state to implement feminist policy aims. The legacy of Antigone's defiance appeared to be lost in the contemporary efforts to recast political opposition as legal plaint and to seek the legitimacy of the state in the espousal of feminist claims.
Butler's study of Antigone led her someplace she had not anticipated. Rather than view Antigone as the figure who defies the state in the person of her uncle, Creon the King, who has forbidden her to bury her brother Polyneices--"I say that I did it and I do not deny it"--Butler follows some of her own most important intellectual mentors, namely, the Enlightenment philosopher and founder of dialectics, Georg Wilhelm Friedrich Hegel, and the poststructuralist psychoanalysts Jacques Lacan and Luce Irigaray, in viewing Antigone "not as a political figure, one whose defiant speech has political implications, but rather as one who articulates a prepolitical opposition to politics, representing kinship as the sphere that conditions the possibility of politics without ever entering into it." Butler is interested in Antigone as a liminal figure between the family and the state, between life and death (this is the choice she must make, and in her defiance of Creon she chooses the latter), but also as a figure, like all her kin, who represents the nonnormative family, a set of kinship relations that seems to defy the standard model.
In addition, there is a contemporary occasion for Antigone's Claim, one that is elucidated in Butler's new preface to the tenth-anniversary edition of Gender Trouble, in which she declares her interest in "increasing the possibilities for a livable life for those who live, or try to live, on the sexual margins." I do not think it amiss to describe Antigone's Claim as dedicated to those who try to die on the sexual margins. Though directly referred to only occasionally in her text, it is the specter of death as a result of AIDS that haunts Antigone's Claim, and the particular dilemma AIDS presents to those who live and die outside the boundaries of normative family and kinship relations. Toward the end of the third and final chapter, "Promiscuous Obedience," Butler states:
For those relations that are denied legitimacy, or that demand new terms of legitimation, are neither dead nor alive, figuring the nonhuman at the border of the human. And it is not simply that these are relations that cannot be honored, cannot be openly acknowledged, and cannot therefore be publicly grieved, but that these relations involve persons who are also restricted in the very act of grieving, who are denied the power to confer legitimacy on loss.
The outlines of the troubled Theban family are well-known. Oedipus Rex, actually written after Antigone (442 BCE) though its action precedes it, begins with the problem of a plague. As a priest informs us:
A blight is on the fruitful plants of the earth,
A blight is on the cattle of the fields,
a blight is on our women that no children
are born to them; a God that carries fire,
a deadly pestilence, is on our town,
strikes us and spares not, and the house of Cadmus
is emptied of its people while black Death
grows rich in groaning and in lamentation.
Soon, of course, we learn what the trouble is, when the blind seer Teiresias informs Oedipus the King, "You are the land's pollution." Unwittingly, the man has murdered his own father during an altercation at a crossroads, wedded his own mother and produced four offspring who are in fact his half-siblings. This unbearable truth causes his wife and mother Jocasta to hang herself in the polluted bedchamber, where afterward Oedipus tears the brooches from her robe in order to blind his own eyes. Toward the end of Antigone's Claim, Butler raises an issue that supports my reading of the book's contemporary occasion: "Consider that the horror of incest, the moral revulsion it compels in some, is not that far afield from the same horror and revulsion felt toward lesbian and gay sex, and is not unrelated to the intense moral condemnation of voluntary single parenting, or gay parenting, or parenting arrangements with more than two adults involved (practices that can be used as evidence to support a claim to remove a child from the custody of the parent in several states in the United States)."
In Oedipus at Colonus (401 BCE), the middle play of the trilogy but written last, an old, blind Oedipus is led onstage by his daughter Antigone. (Sigmund Freud, who did so much for the Oedipus myth, referred at the end of his life to his daughter and fellow psychoanalyst Anna Freud as his "Antigone.") Here, the theme of proper burial, so important in Antigone and in Antigone's Claim, receives advance treatment. Oedipus begs of Theseus, King of Athens, a proper burial when he dies, that Theseus accept "the gift" of his "beaten self: no feast for the eyes." The oracle has prophesied that if Oedipus's sons do not tend his corpse, Thebes will be conquered by Athens, and Oedipus wants revenge on his sons because they drove him into exile from Thebes. When Polyneices makes an appearance toward the end of Oedipus at Colonus, Oedipus not only rejects his son's plea to join his side against his other son, presently in possession of Thebes, he curses them both, a curse that comes to pass between the action of Oedipus at Colonus and Antigone, when in battle both brothers die at once on the other's sword. Polyneices' final words in the trilogy are spoken at the end of Oedipus at Colonus to his beloved sister Antigone, to whom he offers a blessing if she will honor his corpse with burial rites. And here we have arrived at Antigone and Antigone's Claim.
From the start of her career, Judith Butler has been on a quest for a theory of the subject that might work for "those who live, or try to live, on the sexual margins." As she stated in her new preface to the recent reissue of her first book, Subjects of Desire: "In a sense, all of my work remains within the orbit of a certain set of Hegelian questions: What is the relation between desire and recognition, and how is it that the constitution of the subject entails a radical and constitutive relation to alterity?" Hegel's Phenomenology of Spirit has underwritten most of Butler's work, as has the work of Lacan, whose seminar on "The Ethics of Psychoanalysis" is the other major influence on Antigone's Claim. The book both follows from Butler's earlier work and turns in some interesting new directions; namely, it moves explicitly into the realm of ethics and implicitly into practical politics.
While Butler has tended in the past to focus particularly on the section of the Phenomenology of Spirit that deals with the famous "Lordship and Bondage" relation, in Antigone's Claim she makes what seems like an inevitable advance in the text, given the confluence of her present interests, into the section of the Phenomenology that deals with "the true Spirit: The ethical order." In this part, Hegel argues that it is the "Family" that "as the element of the nation's actual existence...stands opposed to the nation itself; as the immediate being of the ethical order, it stands over against that order which shapes and maintains itself by working for the universal; the Penates [household gods] stand opposed to the universal Spirit." For Hegel, it is woman who is associated with these household gods that stand opposed to the universal Spirit or the state; it is woman who is associated with the divine, as opposed to the human, law. The figure of Antigone upholds the divine law when she buries her brother Polyneices (twice) in defiance of her uncle Creon, who has ordered that the corpse of a man who threatened the integrity of the state will be left to rot in the sun, torn by beasts and birds.
Butler's affinities with a philosophical tradition arising from Hegel--the Frankfurt School of neo-Marxist philosophers and social critics (though she rarely if ever refers to them in her work)--are not limited to her use of difficult language, which notoriously won her a Bad Writing Award from the journal Philosophy and Literature. Butler shares with the Frankfurt School a fundamental, one might say foundational, debt to the Hegelian dialectic, which Marx harnessed in his theories of history. Hegel explains his dialectic in the Preface to the Phenomenology of Spirit:
Knowledge is only actual, and can only be expounded, as Science or as system; and furthermore, that a so-called basic proposition or principle of philosophy, if true, is also false, just because it is only a principle. It is, therefore, easy to refute it. The refutation consists in pointing out its defect; and it is defective because it is only the universal or principle, is only the beginning. If the refutation is thorough, it is derived and developed from the principle itself, not accomplished by counter-assertions and random thoughts from outside.
The Hegelian dialectic is a philosophical tradition a classical liberal humanist like Martha Nussbaum does not, apparently, have much sympathy for. It's unfortunate that Nussbaum did not take on this philosophical difference in attacking Butler in The New Republic last February. Instead of accepting the work as being of a tradition "that seeks to provoke critical examination of the basic vocabulary of the movement of thought to which it belongs," in Butler's self-characterization, Nussbaum isolates her as a philosopher:
Butler gains prestige in the literary world by being a philosopher; many admirers associate her manner of writing with philosophical profundity. But one should ask whether it belongs to the philosophical tradition at all, rather than to the closely related but adversarial traditions of sophistry and rhetoric.
According to Nussbaum, Butler is a "new symbolic type" of feminist thinker, influenced by a lot of French "postmodernist" ideas. In Nussbaum's vision, Butler is the Pied Piper of academia, traipsing off with all the "young feminists" behind her. Not only does Nussbaum claim that Butler's ideas are philosophically soft (if they are even philosophy at all), but she claims that Butler is leading a trend away from engaged feminism, having traded "real politics" for "symbolic verbal politics." The "new feminism" of Judith Butler "instructs its members that there is little room for large-scale social change, and maybe no room at all." From here, Nussbaum stoops to condescension ("In public discussions, she proves that she can speak clearly and has a quick grasp of what is said to her") and, ultimately, after several swipes at Butler's "sexy acts of parodic subversion," to the astonishing claim that Butler "purveys a cruel lie, and a lie that flatters evil by giving it much more power than it actually has": Butler's "hip quietism," according to Nussbaum, "collaborates with evil."
In Antigone's Claim, it is not only Antigone's public grief over Polyneices and her insistence that she bury him that absorbs Butler's interest but also the way in which her defiance of Creon, her condemnation to death and the taking of her own life (like her mother, Jocasta, she hangs herself) "fails to produce heterosexual closure for that drama"--if Antigone had complied, she would have married Creon's son and presumably become a mother. This, Butler claims, "may intimate the direction for a psychoanalytic theory that takes Antigone [as opposed to Oedipus] as its point of departure," namely, a psychoanalytic theory that would step outside the confines of compulsory heterosexuality.
And yet Butler's attraction to this particular family drama goes further back. While for Freud and for Lacan after him the Oedipal drama is a paradigm that in various ways instates, by way of prohibition, normative heterosexuality and kinship relations, Butler views this drama differently. In its deviations from the law and in its apparent need for prohibition, the most famous Theban family represents not just the predicament of those who live on the sexual margins but in a more historical sense, the family and kinship relations of our times:
Consider that in the situation of blended families, a child says "mother" and might expect more than one individual to respond to the call. Or that, in the case of adoption, a child might say "father" and might mean both the absent phantasm she never knew as well as the one who assumes that place in living memory. The child might mean that at once, or sequentially, or in ways that are not always clearly disarticulated from one another. Or when a young girl comes to be fond of her stepbrother, what dilemma of kinship is she in? For a woman who is a single mother and has her child without a man, is the father still there, a spectral "position" or "place" that remains unfilled, or is there no such "place" or "position"?... And when there are two men or two women who parent, are we to assume that some primary division of gendered roles organizes their psychic places within the scene, so that the empirical contingency of two same-gendered parents is nevertheless straightened out by the presocial psychic place of the Mother and Father...that every psyche must accept regardless of the social form that kinship takes?
Butler sees in the Oedipal story an allegorical reflection of things as they presently are; what if, rather than prohibiting such things, we took them as our starting point; what if we accepted the nonnormative? Second, Butler wants to move the fulcrum of the drama a generation forward because Antigone occupies a position not only between life and death, and not only between private and public, between the family and the state: Antigone figures for Butler a desirable transition into the world of ethics that does not forget familial origins. This is made clear in Antigone's extended exit speech, one Butler focuses on especially. On the point of being led away to her death, Antigone argues that her brother Polyneices is irreplaceable and therefore had to be honored by her even though it means her own death. A husband or a child could have been replaced, but with her parents no longer alive, a brother could not.
What is the law that lies behind these words?
One husband gone, I might have found another,
or a child from a new man in first child's place,
but with my parents hid away in death,
no brother, ever, could spring up for me.
Such was the law by which I honored you.
In this speech one senses that Antigone is finally at peace. For she, like the rest of her family, is characterized as much by her personal moral sense as she is by her strange kinship predicament. And one senses in Butler's interest in these lines a homage to those who have lived, or have tried to live, and to those who have died "on the sexual margins."
A few years back, critics of postmodernism, both left and right, chuckled at the academic sting pulled on the journal Social Text when it published Alan Sokal's bogus article on the socially constructed nature of nature. For conservatives, that the journal ran Sokal's fuzzy call for a progressive postmodern science confirmed the fundamental divide between the politicized humanities and the objective sciences--proof positive of cultural studies run amok. In all the discussion that followed, however, little notice was paid to the origins of post-World War II radical critiques of science. In the shadow of Hitler and Stalin and in the wake of the Vietnam War, theorists from Theodor Adorno to Donna Haraway have been concerned with the ways in which science has colluded with acts of barbarism.
Patrick Tierney's Darkness in El Dorado examines the tragic consequences of medical and social science research on the Venezuelan Yanomami and reminds us why scientific practices and theories should indeed be the domain of social critics. White scientists in the jungle have long been central characters in the stories the West tells about itself. Alongside Humboldt and Mengele, Tierney's book now adds to the tropical pantheon James Neel, founder of the University of Michigan's human genetics department, and Napoleon Chagnon, perhaps the world's most infamous living anthropologist.
Well before Darkness's publication, Tierney's most damning charge--that Neel and Chagnon provoked, perhaps knowingly, a fatal 1968 measles epidemic responsible for "hundreds, perhaps thousands" of deaths--had created a scandal that threatens to distract from the real significance of his research. The Chronicle of Higher Education reported that the book may create a crisis "unparalleled in the history of anthropology." At a special American Anthropological Association forum in mid-November, defenders of Neel charged libel and politicized agendas. One panelist proclaimed that Tierney's "anti-science views" would jeopardize future vaccine efforts and lead to more deaths from disease. Chagnon, evoking the terms of the Sokal affair, has responded that only "cultural anthropologists from the Academic Left" who "despise the words 'empirical evidence'" would take Tierney's claims seriously.
Empirical evidence is not lacking in Tierney's copiously footnoted book. Like all good chronicles of Western rationalists who lose their minds among primitives, Darkness in El Dorado is filled with absurd and disgraceful behavior: a French anthropologist who loses himself for decades in a sexual Eden; the world's wealthy holding a tuxedo dinner catered by helicopters on a jungle mountain; researchers who try to kill one another with machetes or commit suicide after being spurned by a Yanomami lover. But aside from his Joseph Conrad-like musings as to what it is about the Yanomami that made white people crazy, Tierney has written a fascinating, but also frustrating, ethnography of the practices and beliefs of cold war medical and social science researchers.
Tierney focuses primarily on the long and strange career of Napoleon Chagnon, who originated the myth of Yanomami aggression in his book The Fierce People, the all-time-bestselling ethnography. Chagnon portrayed the Yanomami as one of the most violent cultures on earth, where villages went to war to procure women and serial murderers bred at a higher rate than men who did not kill.
Tierney convincingly demonstrates his charge that unethical methodology and false science produced this myth. He also describes its often fatal consequences.
Most cultural anthropologists now believe that the wars Chagnon witnessed were provoked by Chagnon himself. He offered axes, machetes, fishhooks and pots in exchange for ethnographic information, creating tensions among villages that vied for monopoly control of his wares. Within months of Chagnon's arrival in 1964, three different fights broke out between villages that had previously been at peace for decades. Anthropologist Brian Ferguson reports that Chagnon was "very much involved in the fighting and the wars. Chagnon becomes a central figure in determining battles over trade goods and machetes." A Yanomami reports that Chagnon offered him an outboard motor in exchange for help, including the procurement of a Yanomami wife. Shotguns, a seemingly unlimited supply of trade goods and a willingness to don feathers, face paint and a loincloth allowed Chagnon to transform himself from an "impoverished Ph.D. student at the bottom of the totem pole to being a figure of preternatural power."
Tierney argues that many of Chagnon's data are simply false. The Yanomami do not have a particularly high murder rate, nor do men who kill reproduce more than those who don't. Neither are the Yanomami particularly well-nourished--a claim that Chagnon uses to argue that men fight over women and not food.
In the United States, Chagnon and his sociobiologist allies continue to portray the Yanomami as an untainted relic of our past--a handy control group used to prove the biological basis of a range of aggressive human traits. In Latin America, the endurance of the myth of Yanomami aggression has reinforced racism and justified indifference. Both the Venezuelan and Brazilian governments have used unfavorable images of the Yanomami to justify their failure to protect them from migrants, who, starting in the late 1980s, increasingly entered the region, resulting in the death from disease and violence of untold numbers of Yanomami.
Tierney is at his best when he discusses Chagnon's career within the cultural history of the cold war. Born poor in Michigan, Chagnon used the expanding university system to climb out of poverty. Like many at the time who through discipline and hard work improved their class standing, Chagnon developed a visceral antipathy toward communism. It manifested itself in an intense masculine persona that earned Chagnon a reputation for barfighting and academic brawling. One of Tierney's insights is that Chagnon's theories had their "genesis during the Vietnam War and its cultural equivalent on the University of Michigan's Ann Arbor campus, where hippies in tepees chanted slogans like 'Make love, not war.' The whole point...was that you had to make war in order to make love--that violence was part of the natural order.... As a cold war metaphor, the Yanomami's 'ceaseless warfare' over women proved that, even in a society without property, hierarchies prevailed."
Tierney is on to something important here. The Fierce People was published in 1968, a particularly tough year for the United States abroad. American officials justified counterinsurgency campaigns that were taking place in the jungles of Latin America, Africa and Asia in decidedly Chagnonian terms. As one 1968 dissenting State Department memo put it: "We have condoned counter-terror.... We suspected that maybe it is a good tactic, and that... murder, torture, and mutilation are alright if our side is doing it and the victims are communists. After all hasn't man been a savage from the beginning of time so let us not be too queasy about terror. I have literally heard these arguments from our people."
Tierney rightly reads The Fierce People as a piece of home-front propaganda. To counter those who argued that war was caused by struggles over resources (a central claim of New Left interpretations of both the cold war and the Vietnam War), Chagnon "engineered a bold creation myth, a ferocious Garden of Eden, where the healthy, well-fed Yanomami fought for... sexual pleasure.... It was not the Yanomami but Chagnon's fellow Americans who belonged, in reality, to one of the best-fed, healthiest societies in history. America enjoyed abundance so delirious that it seemed, for a short time in the 1960s, that its citizens would not agree to the stress of world combat against Communism.... At that critical moment, The Fierce People... came to reverse a dangerous complacency, proof that the battle is never won, that the fight can never be abandoned."
By the late 1980s Chagnon was in trouble. Tierney misses an important opportunity to discuss how the decline in Chagnon's fortunes was tied to the end of superpower tensions. At home, a generation of anthropologists critical of its discipline's role in justifying US foreign policy came into professional power. In Venezuela his former research subjects were demanding that he be barred from entering their territory. And reflecting the post-cold war extension of economic activity into areas previously off-limits, gold miners poured into the Amazon, causing widespread ecological destruction and social dislocation. Challenged by his liberal colleagues, harangued by feminists, threatened by dark-skinned peoples and adrift in the new post-cold war economy, Chagnon became an international version of the angry white man.
Chagnon did what many did at the end of the cold war--he went private. He teamed up with a flamboyant Venezuelan industrial gold miner, who turned "tracts of forest into mud soup," and the mistress of the Venezuelan president, who has since fled the country following indictments for corruption and fraud. The three came close to establishing a private biosphere in Yanomami territory that would have given them political authority over the Yanomami and monopoly rights over mineral and scientific claims. In order to muster international support for their scheme, they shuttled journalists and scientists in and out of remote Yanomami communities on lightning helicopter tours, without providing protection against possible contagion. Newspapers and television news ran stories of recently discovered "lost villages," while "foreign scientists carried out huge amounts of plant and animal samples."
When Venezuelan and international opposition scuttled his plan to set up a fiefdom in his former field site, Chagnon, now largely shut out of anthropology journals, stepped up efforts to disseminate his theories in the popular press. Although Chagnon often casts himself as an embattled truth-seeker--the preferred role of most biological determinists, no matter how much funding or open access to the media they have--Tierney points out the "abject admiration many male journalists apparently felt for the great anthropologist." He cites a fax that Matt Ridley, the science reporter at The Economist, sent to Chagnon apologizing for not writing a more sympathetic piece: "I have written it in the way that the International Editor wanted, which means 'impartially.' (She is a bit PC, herself.) So you may find it less unambiguously sympathetic to you than you might have hoped, but it is about as far as I dare go.... I do hope you like it."
What will make and, unfortunately, probably break Darkness in El Dorado is its description of the deadly 1968 outbreak of measles that coincided with the arrival of an expedition, funded by the Atomic Energy Commission and headed by Neel and Chagnon, to collect Yanomami blood samples.
Tierney's speculation that Neel may have been responsible for the epidemic is based on Neel's decision to use what was by 1968 an antiquated vaccine, Edmonston B, which was contraindicated for isolated populations such as the Yanomami. Tierney suggests that Neel chose this vaccine to prove that American Indians were not genetically vulnerable to European germs. Since Edmonston B produced the same level of antibodies as an infection of real measles, follow-up antibody tests would allow for a comparison of European and Yanomami immune systems. This may be why, according to Tierney, Neel opted for Edmonston B even though it was known to cause measleslike symptoms among isolated groups and even though a cheaper, safer vaccine (but one that did not produce antibodies comparable to the disease) was available. Tierney argues that because Edmonston B produces symptoms similar to measles, its use may have ignited the outbreak; he goes even further by proxy, citing a medical historian who ventures that Neel may have intentionally started the epidemic.
Tierney unfortunately has presented his case in a way that allows for easy dismissal. He provides compelling evidence that Neel and Chagnon did indeed treat the vaccination campaign as an experiment. For instance, by Neel's own telling, in the first village, before the epidemic, the team inexplicably vaccinated only forty Yanomami out of a total population of seventy-six, even though it had enough doses for all. Combined with the fact that most in this village had been tested for measles antibodies two years earlier, the inoculation of half the village created a fortuitous control group for Neel's published findings. It also seems that the vaccine did induce fevers and rashes in many Yanomami. Nevertheless, the fact that Tierney gives no direct evidence to back up his most serious conjecture--that the measles epidemic was caused by the vaccine--threatens to discredit his entire study. (Also, in response to the pre-publication controversy, most medical experts insist that it is impossible for a vaccine, no matter what symptoms it may bring on in the inoculated, to spread as an epidemic.)
Tierney's missteps here speak to a larger problem with his book, which draws its inspiration more from The X-Files than from the Frankfurt School. Tierney tries too hard to link the actions and motives of the individuals involved in a tight net of intrigue, misrepresenting cold war social science as a secret society of an elected few.
Of course, for many, the actions of the United States during the cold war don't make sense any other way. Consider this history: Neel, who did research on Hiroshima survivors, was funded by the Atomic Energy Commission to collect thousands of samples of Yanomami blood because it was thought it could be used as a baseline to measure degrees of genetic mutation. In 1958 the AEC, which in other instances engaged in deadly human radiation experiments, paid Marcel Roche, a Venezuelan doctor who worked on Neel's 1968 expedition, to inject the Yanomami, without their knowledge, of course, with radioactive iodine to study why they did not suffer from goiters. Tierney should not be entirely blamed for lacking a theory, other than conspiracy, to explain this.
Darkness in El Dorado unconvincingly attempts to trace this shameful history directly to Neel ("I felt that Neel was the key"), unfairly describing him as an extreme eugenicist. This is unfortunate, for Tierney could have written a more powerful book by demonstrating how the cold war produced acts of barbarism regardless of individual motive.
This is not to let Neel and Chagnon off the hook. They were instrumental in the creation of a body of knowledge that valued the Yanomami not for their own sake but for what they could provide cold war science. Their blood was believed to contain answers to questions raised by the new post-Hiroshima world, while their culture was thought to be a distilled version of what the West once was and, for some, should be again.
In the documentary made of the 1968 expedition, Neel and others are shown professionally inoculating Yanomami, who are presented as pictures of vibrant health. Sound outtakes reveal a different story. The team was exhausted, sick and panicked as the epidemic escaped their control and ravaged the Yanomami. Neel can be heard ordering the cameraman to stop filming a sick Yanomami. Whatever the cause of the measles outbreak, it is probable that the research team exposed the Yanomami to respiratory infections and other illnesses. The outtakes also reveal that Neel and Chagnon were much more concerned with making the documentary and collecting blood samples than with containing the epidemic. They broke quarantine lines to procure donors and quickly abandoned the area so that their blood would not be ruined in the tropical heat.
Tierney's effort to pin the tragic history of the Yanomami on Neel speaks to a larger problem, both in his book and in current ways of thinking about colonialism. With the failure of socialism and the discrediting of revolutionary movements and governments, many First World activists have thrown their energy into advocating on behalf of the cultural rights of native peoples. Much of this work is profoundly apolitical, justified more by appeals to Indian virtue than by critical analysis. This kind of activism too easily sets itself up for dismissal when it is revealed that Indians may have their own interests and may not be as innocent as portrayed.
This problem is reproduced in Tierney's book. It speaks to the poverty of our political culture that Tierney, an experienced investigative reporter, refuses, either out of ignorance or bias, to discuss the history of the Amazon in reference to colonialism, capitalism or racism. Instead, he searches for the mastermind behind the mayhem. Tierney creates a kitschy Heart of Darkness-like tale and casts himself as Marlow and Chagnon as Kurtz (Neel, perhaps, could be King Leopold). Well before we hear any Yanomami voices, we learn of Tierney's battles against jungle thieves and malaria, heroically rescuing Yanomami children and fending off evil gold miners.
Tierney's narrative rightly demonstrates how objective scientists can be implicated in a history of atrocity--and his gaffes should not distract from this history--but it can't account for the fact that while the AEC was paying for Neel's and Chagnon's jungle excursions, it was also funding the work of Harvard biologist Richard Lewontin, along with other progressive scientists and anthropologists. These scholars became powerful critics of how the supposed objective research of their colleagues served not-so-objective agendas and had not-so-benign consequences. These politicized scholars have served science well--proof positive that Adorno was right, that "science needs those who disobey it."