Say what you will against the Hollywood event film, and you can say it
twice about Spider-Man. Twice, because this movie has been so
successfully pre-sold, mall-booked, cross-marketed and revenue-streamed
that Columbia Pictures confidently scheduled Spider-Man 2 before
it ever let an audience see
the first. Violent? The fight scenes in this picture must have cost a
hundred Foley artists a hundred nights in the recording studio, banging
away at a hundred anvils. Crass? The product placements are literally as
big as Times Square. Crude? The camera is perpetually drawn, as if by
animal magnetism, to the cleavage of Kirsten Dunst, the better to
examine two of her character's few defining features. It is not enough
to say that Spider-Man is a big movie. It is a big, big movie.

And Spider-Man is also a small movie, which hangs from the thin,
very odd thread of its lead actor, Tobey Maguire. A little late in life,
though not implausibly so, Maguire plays high school senior Peter
Parker: the smart, shy, artistic, dateless victim of his graduating
class, the kid voted Most Likely Not to Be Voted Anything, who happens
to get bitten by a mutant spider and so turns into--what? A superhero?
More like a freak. As conceived for comic books by Stan Lee and Steve
Ditko, Spider-Man was the first really alienated guy to swoop around
fighting crime in a funny outfit. His strange powers made this teenage
outsider into even more of an outsider--and Spider-Man the movie
stays true to that idea, thanks mostly to Maguire.

Consider his voice, first of all: a nasal tenor instrument, with which
he's in no hurry to say anything. Maguire doesn't cultivate a stammer,
as did James Stewart (whom he occasionally calls to mind), but he does
give a consistent impression of letting his words trail a beat or so
behind his thoughts. You might recall his doing so in The Ice Storm (in which, for my money, he was the film's one point of
contact with reality), or in The Cider House Rules (where he was
used for his air of moping fragility, yet somehow held his own against
Michael Caine), or yet again in Wonder Boys (where Michael
Douglas and Robert Downey Jr. kept competing to see which one could play
more broadly, and Maguire very quietly and subtly took control of the
movie). It's characteristic of him that in one of his better moments in
Spider-Man, he says nothing at all. "Just got contacts?" asks MJ
(Dunst), the girl of Peter Parker's dreams, when she sees he's no longer
wearing glasses. The question sounds casual, but the occasion is
charged; MJ has noticed for the first time the color of Peter's eyes
(spider-power has corrected his vision), and he's just been granted his
first chance to look into hers. Maguire considers her question, pauses
as if a dozen possibilities were crowding his head and then settles on a
reply: He grins. It must be the right choice. At the screening I
attended, the audience answered his smile with laughter.

Maguire can get that effect because he generates a time zone of his own
around his body, and also because that body is a mismatch not only for
its surroundings but for itself. The carriage is stiff. The smile, when
granted, loops goofily up and down the long face. The features of that
face don't quite come together. Although the assertive cleft chin might
well belong to a superhero--or a movie star--it cohabits a bit
uncomfortably with rosebud lips, a delicate nose and eyes whose natural
tendency is to watch for trouble. The impression, as a whole, is one of
pleasant ungainliness--which may be why Maguire seems as surprised as
the audience to discover what's happened to his musculature. When he
awakens after the spider bite, this 98-pound weakling finds that his
torso can bulge and ripple, just like something from an old Charles
Atlas ad.

The allusion to Charles Atlas seems deliberate on the part of the
director, Sam Raimi. He knows those ads had their rightful place on the
back covers of comic books, where they held out a fantasy of power to
the medium's core audience, the Peter Parkers of this life. That's
something comic books share with event movies; they're both made to
appeal to boys in their adolescence, or barely out of it. The
difference, of course, is that event movies mount their appeal by
deploying resources of a vastly greater scale, comparable (let's say) to
that recently used by the Pentagon in Afghanistan. Part of what I like
about Spider-Man is that despite its staggering budget and
daunting market clout, it stays in touch with the unpretentiousness of
the source material. Raimi uses Maguire for that purpose, and he also
uses a second, uncredited star: New York City.

To an extent that's very rare with digitized, semi-cartoon pictures,
Spider-Man is a movie shot on location. You see the Columbia
University campus, Midtown, the Flatiron district, SoHo, the East River
and (maybe most gratifying of all) the row houses and little commercial
streets of Queens. Very often the action that takes place in these
settings is computer-generated, with Spider-Man swinging from building
to building by his web, or performing the kind of acrobatics that were a
prime attraction of The Matrix. Even so, the real city remains an
irreducible presence in Spider-Man, as when Peter discovers his
new abilities and goes leaping across the rooftops in exhilaration--the
roofs, in this case, belonging to the same squat apartment buildings you
see every day from the elevated train.

So there's something humble, plain and slightly old-fashioned working
within this mega-movie--or perhaps even working against it. As I turn
from Maguire and the settings to the story and its themes, as elaborated
by screenwriter David Koepp, I notice that the conflict between big and
small is more than an accidental effect in Spider-Man. It's the
movie's substance.

The plot, in brief, concerns a surrogate father who happens to be an
all-powerful homicidal maniac. Norman Osborn (played by Willem Dafoe,
the movie's Michael Douglas and Robert Downey Jr. rolled into one) is a
millionaire scientist who at first befriends the impecunious Peter,
offering him concern and sympathy. But Norman is also a military
contractor who hungers for that next big contract, as a result of which
he undergoes his own transformation, developing a monstrous alter ego
known as the Green Goblin. Whereas Norman is kind and gentle toward
Peter, the Green Goblin schemes to destroy Spider-Man, striking at him
through the people he loves.

As someone who has been a son and is presently a father, I wasn't
convinced. Spider-Man tosses out a notion of the paternal
relationship, but it conveys nothing of the feeling of bone of my bone,
flesh of my flesh. (Paradoxically, the relationship between MJ and her
father has emotional weight, even though it's a side issue in the movie.
Her father bullies and belittles her--which may be why she takes a
liking to Peter. He's the one male animal she encounters who is strong
but doesn't act it.) But if we agree not to take the movie's terms more
seriously than they deserve, then the father-son conceit can be made to
yield some sense. Let's say the father is a stand-in for Columbia
Pictures, a Sony Pictures Entertainment Company, and the son is Sam
Raimi, who at one moment gets sweet talk and huge sums of money from
his corporate parent and at another is reminded, no doubt forcefully,
that the parent is in fact his master, who will kill for those revenue
streams.

Does this interpretation seem far-fetched? Then think about Peter's
Uncle Ben, the other surrogate father in the film and the movie's moral
voice. Raimi has waggishly cast Cliff Robertson in the role--no doubt
because Robertson, too, went through a life-altering, science-fiction
change in the movies, in Charly, but also perhaps because he was
the one who uncovered malfeasance at Columbia Pictures in the late 1970s
and so brought down its management. Robertson's mere presence in a new
Columbia release is a kind of history lesson, and a rebuke. Who better
to tell Peter, practically with his dying breath, that power brings
responsibility? Who better to play a wise, elderly working stiff from
Queens, in contrast to Dafoe's military-industrial tycoon?

And who can doubt that such a contrast is needed, when Spider-Man
portrays modern economic life as an endless series of downsizings? The
older people in the movie are pushed out of their jobs; the younger
can't get any. Why, the very notion of hiring someone seems repugnant to
the editor of the Daily Bugle (JK Simmons) when Peter comes
looking for work. "Freelance!" he bellows. That's the best thing for
young people today. Then, as a substitute for decent freelance pay, the
editor goes on to promise "Meat--Christmas meat!"

As an object of commerce, Spider-Man belongs to the world of the
Daily Bugle, and to the Green Goblin. As a work of the
imagination--as a movie, rather than a blockbuster--it belongs to Cliff
Robertson and Tobey Maguire, to New York City and to New York's people
(who put in a surprising, crucial mass appearance late in the film). I
liked seeing this conflict played out openly, in the first summer-season
mega-production of 2002. But that's not why I gave my heart to
Spider-Man.

What really moved me was the exchange between Peter and MJ at the end of
the film. It's a scene that comes out of nowhere, if you've ignored the
small movie within Spider-Man and seen only the product
placements and special effects. But if you've registered the moments of
wit and feeling that surface throughout the picture, intermittently but
steadily, you will feel that it's right for the movie to end here, in a
graveyard, with MJ at last caressing Peter's face and doing it with a
black-gloved hand. Finally she can speak of what she wants, amid death.
Peter wants to reply, and could do so eloquently; but, being Tobey
Maguire, he chooses to hold back.

And so it ends, triumphantly, unhappily--that is, until Spider-Man 2.

Nearly fifty years ago, in Eros and Civilization, Herbert Marcuse
suggested that homosexuals (then the current term) might
someday--because of their "rebellion against the subjugation of
sexuality under the order of procreation"--provide a cutting-edge social
critique of vast importance. Marcuse's prophecy may have come to pass.
Or so some are claiming.

There is mounting evidence that a distinctive set of values has
emerged among gay people (despite enormous variations in their
lifestyles) in regard to how they view gender, sexuality, primary
relationships, friendships and family. One even increasingly hears the
claim that gay "differentness" isn't just a defensible variation but a
decided advance over mainstream norms, that gay subcultural perspectives
could richly inform conventional life, could open up an unexplored range
of human possibilities for everyone. That is, if the mainstream
were listening, which it isn't.

The mainstream's antenna remains tuned to a limited number of
frequencies: that heterosexuality is the Natural Way; that (as we move
right of center) lifetime monogamous pair-bonding is the likeliest
guarantee of human happiness; that the gender binary (everyone is either
male or female and each gender has distinctive characteristics)
is rooted in biology. Those queers who look and sound like "normal"
people (or are at least able to fake it in public)--meaning, mostly,
well-mannered, clean-cut white men and lipstick lesbians--are
being welcomed into the mainstream in mounting numbers.

But the armed guards at the gates continue to bar admission to (as they
might put it) overweight butch dykes, foul-mouthed black queers or
dickless "men" and surgically created "women" delusionally convinced
that they're part of some nonexistent group called the "transgendered."
The mainstream somehow senses that the more different the outsider, the
greater the threat posed to its own lofty sense of blue-ribbon
superiority. Fraternizing with true exotics can prove dangerously
seductive, opening up Normal People to possibilities within themselves
that they prefer to keep under lock and key.

But what happens when "normal-looking" queers start asserting how
different from you they actually are--and start lecturing you about how
abnormal your own proclaimed normalcy is? Take, for example, the
arguments that David Nimmons puts forth in his new book, The Soul Beneath the Skin. His focus is on precisely those privileged urban
gay white men who, judged by outward demeanor, closely resemble
stereotypical heterosexual males; they don't look or act at all like
those phantasmagoric renegades, the transgendered. Yet according to
Nimmons, standard-issue gay males have birthed a strikingly different
(and, he claims, superior) set of personal ethics and community
institutions. These are guys, for God's sake, who hang out in gyms and
look like football players! Yet far from being your average macho Joes,
their subculture is, Nimmons claims, marked by "a striking range of
cultural innovations."

What are its chief identifying features? In the past, the question has
typically been answered by referencing a set of negative stereotypes
that emphasize an obsession with buffed bodies, drug-driven dancing
marathons, "circuit" parties of profligate sexual excess, a devotion to
consumerism that excludes politics and the life of the mind, and a
ruthless narcissism that denies entry to its playgrounds to all but
stunning young white male bodies reeking of Ecstasy and attitude.

In The Soul Beneath the Skin, Nimmons builds a strong
countercase, favorably contrasting gay male values with those associated
with heterosexual men. Urban gay life, for instance, is notable for the
absence of community violence. The gay male bar scene rarely spawns
shouting matches, brawls or an exchange of blows. Our dances, parades,
political rallies and marches are suffused with drama but nearly devoid
of ferocity.

We also have a high rate of volunteerism. According to one large-scale
study, the gay cohort volunteered 61 percent more time to nonprofit
organizations than did the heterosexual one--and divided its charitable
contributions nearly equally between gay and nongay causes. Gay men,
moreover, consistently score higher than straight men on studies that
attempt to measure empathy and altruism. We perceive
discrimination against others more readily than other men do, and we're
more likely to have friends across lines of color, gender, religion and
politics. It's telling that during the trial of Matthew Shepard's
murderers, nearly every leading national gay and lesbian organization
publicly opposed the death penalty. Cruelly treated for generations, we
practice tenderness and tolerance more than other oppressed minority
groups--who tend to treat us with contempt and disdain.

Nimmons also applauds the premium that many (though certainly not all)
gay men put on being emotionally expressive and sexually innovative--for
the compelling way we've reworked the rules governing erotic
exploration, friendship and coupledom. In regard to couples, he argues
that the community ideal (even if only approximated in practice) is one
of mutuality and egalitarianism--which again sets us apart from
stereotypical straight men, some of whom spout egalitarian rhetoric but
few of whom carry their fair share of domestic responsibilities.

I find much of what Nimmons has to say persuasive--indeed, a recent
British study, Same Sex Intimacies, by Jeffrey Weeks, Brian
Heaphy and Catherine Donovan, confirms gay male distinctiveness beyond
the borders of the United States. Still, I do have problems with some
aspects of Nimmons's argument. The most serious derive from his lack of
clarity about whether he's primarily defending the limited number of
urban, privileged, mostly white men who make up the gym/circuit/Fire
Island Pines crowd, or whether he's mounting a broader defense of gay
male culture as a whole.

He wobbles back and forth, though he finally does seem more interested
in sticking up for the small circuit set than in burnishing the image of
the general gay male community. In my view, however, the distinctive set of values that he catalogues applies more justly to the latter than to the former. I've made dozens of trips over several decades to the Pines, for
example, and can say only that Nimmons's description of it as "a form of
queer kibbutz" where "an easy male affection suffuses the air" is
wildly at odds with my experience of it as a smug, fatuously snotty
watering hole for the very rich or very beautiful.

I also think that Nimmons overdraws the contrasts between gay and
straight men and overcredits our "stunning cultural accomplishment[s]."
After all, Hugh Hefner made some contribution to the "erotic
innovations" that so enthrall Nimmons. And experimental patterns in
sexuality and relating date back at least to the countercultural 1960s
(not to mention the nineteenth-century Oneida community, the Bloomsbury
crowd or the bohemian Greenwich Village of the 1920s). Nimmons also
minimizes the notable shifts in attitude that characterize today's
younger generation of heterosexuals. In simplistically insisting that
"the icy winds of sexual repression...have swept across the
[heterosexual] American landscape," Nimmons fails to understand how
broadly attitudes about sex and gender have shifted, especially in urban
areas, as traditional notions of what constitutes a "family" or a
"viable" relationship come under increasing scrutiny.

Nimmons is better at delineating gay male distinctiveness than
accounting for it. He establishes the fact of gay male peaceableness,
for example--and does so with style and verve--but he's of little help
in explaining it, other than to remark in passing that "gay men might be
biologically a gentler species of male." But it seems to me far more
likely that our nonviolent behavior originates in our historical
experience. Having been subjected for generations to gay-bashing and
police brutality, we've learned, out of prudence and fear, not to let
our anger show in public. Tellingly, it does show in private: The
rate of domestic violence among both gay men and lesbians ranks right up
there with heterosexual violence. (The latest of many studies to confirm
that is No More Secrets, by Janice Ristock.) We're not devoid of
rage; we're unwittingly passive-aggressive, acting out the aggressive side in the comparative safety of our homes--or on ourselves, through
the abuse of alcohol and drugs.

But Nimmons, prone to inspirational excess (as when he writes about "the
centrality of bliss and play in our lives"--sure, try telling that to
the legions of poor gay people), is impatient with introspection. He
sneeringly refers, at one point, to "the reigning queer academic
chatter--uh, sorry, discourse," showing no awareness of how much queer
(and feminist) theory has contributed to the "new culture" whose virtues
he trumpets.

Besides, he has ideological allegiances of his own, though he reveals
them off-handedly. Phrases like "hard-wired," "essential components" and
"innate tendency" are sprinkled throughout Soul, tipping
Nimmons's deterministic hand. They're sprinkled, not boldly embraced,
and Nimmons frequently inserts a tepid disclaimer to protect his flank:
"There is much to argue with in any strict sociobiological view," he
says at one point, but never tells us how much. He even drops in
a little spiritualist fairy dust now and then, as when suggesting that
those involved in the party circuit are, in their pursuit of "rapture"
and "bliss," direct descendants of "ancient shamans."

No, we have to look elsewhere for deeper insight into the origins and
significance of the gay male version of masculinity. I have two offbeat
candidates in mind: Talmudic studies and relational psychoanalysis. The
towering figure in Talmudic studies these days is Daniel Boyarin of the
University of California, Berkeley. His 1997 book Unheroic Conduct is a work of immense importance, all at once astonishingly
erudite, witty, playful and boldly speculative. As its reputation
spreads, it's beginning to roil the waters far beyond Talmudic studies.

Boyarin's basic thesis--though this summary won't do justice to its
supple byways--is that traditional Ashkenazic Jewish culture produced,
in opposition to the Roman model of the powerful, aggressive, violent
warrior, a cultural ideal of masculinity that valorized gentleness,
nurturance, emotional warmth, nonviolence, inwardness and studiousness.
These characteristics were associated with sexual desirability, not
sexlessness--in contrast to the somewhat comparably pacific early
Christian model of maleness associated with the desexualized St.
Francis. This doesn't mean, Boyarin emphasizes, that orthodox Ashkenazic
culture was sympathetic to women (who were excluded from power) or to
homoeroticism (though male sexual attraction to other males does not
seem to have been considered abnormal).

By the nineteenth century, the now stereotypical figure of the
"feminized" Jewish man had become, in the minds of many Jews, a
roadblock to assimilation; a successful effort (joined by Freud and
Theodor Herzl, among others) was made to discredit the once-privileged
model of a gentler, more nurturant masculinity as either the
pathological product of the Diaspora or a figment of the anti-Semitic
imagination.

Boyarin wants to reclaim the earlier tradition. He believes, and I'd
agree, that restoring the once-revered model would greatly help to
destabilize binary notions of gender, would emancipate men and women
from roles that currently constrict their human possibilities. The
critical recovery of the past would, in Boyarin's words, make for the
redemption of the future. The implications of Boyarin's work are
breathtaking. By reclaiming a radically different--and socially
constructed--model of masculinity, he wreaks havoc with simplistic
biological determinism and offers us a previously unseen path toward
social change.

As a champion of the gentle, inward male, Boyarin has to confront the
macho muscularity of the circuit culture, and he does so in a typically
nuanced way. Himself an openly gay man, Boyarin has no trouble
appreciating, on one level, the beauty of the gym-built gay male body.
But unlike Nimmons, who uncomplicatedly exalts it, Boyarin warns that
the emphasis on powerful muscularity reinforces "the dimorphism of the
gendered body and thus participates... in the general cultural standard
of masculinity rather than resisting it." In contributing to the notion
that only one kind of male body is desirable, the gym stud-bunny is
helping to reinforce the valorization of "topness" over receptivity that
already dominates our culture, sexual and otherwise.

The macho-looking gay male is also serving another negative function.
The gym-built body, imitative of stereotypical maleness, all but
announces that "No Sissy Lives Here," thereby encouraging gay men
(including the stud-bunnies themselves) to bury and deny the
gender-discordant traits that made so many of us feel painfully
different in childhood--to repudiate, in other words, "woman-identified"
aspects of the self. ("Gender-discordant" is a necessary but troublesome
term, implying as it does that we know what a gender-concordant
model looks like and that it exists cross-culturally and is superior.
The fine essays in Matthew Rottnek's Sissies and Tomboys further
explore these issues.)

I suspect that if we really do care about breaking down the gender
binary, the place to look for inspiration is not Gold's Gym but the
increasingly visible transgender movement, offering as it does a radical
remodeling of traditional "masculinity" and "femininity." Transgendered
people and gender-discordant gay men are notably absent from Nimmons's
book. So, too, is any discussion of lesbian culture ("Lesbians and gay
men inhabit radically different worlds," is Nimmons's weak
justification). Not accidentally, those who are transgendered,
gender-discordant or lesbian are also rarely seen at, if not actually barred from, the circuit party network. Yet all three belong at the
heart of any comprehensive discussion of a "new" gay culture.

The extent of gender discordance among gay men hasn't been a
front-burner topic since the early 1970s, when radical gay
liberationists championed an androgynous ideal. It's time to stop
avoiding the topic. Boyarin has provided us with a historical context
for dealing with it, and the psychiatrist Richard Isay (among others)
has offered us some provocative contemporary data.

In a 1999 paper in the journal Psychiatry, Isay insists that all
of the several hundred gay men he's treated over the past thirty years
exhibited gender-discordant traits in childhood. (Such traits, it should
be pointed out, are not confined to children who later develop a
same-gender erotic preference: Some fifteen years ago, Richard Green, in
his much-contested book The "Sissy Boy Syndrome" and the Development of Homosexuality, found that roughly a third of the
gender-discordant male children he studied became, as adults,
heterosexual in orientation.)

If one accepts--as I do, but Isay does not--the queer theory argument
that "male" and "female" gender roles are not to any significant degree
intrinsic--that is, biologically determined--but are primarily, and
perhaps even exclusively, the products of learning and repetitive
performance, then "gender discordance" becomes something of a non
sequitur: Where all boys are capable of (perhaps even, in the earliest
years, inclined toward) a female-identified--which may be the same as
saying transgendered--self-image and presentation, then no particular
gender configuration can legitimately be seen as "deviant." Boyarin's
Ashkenazic Jews--men whose avoidance of what we call "rough and tumble"
play would, by contemporary standards, be branded as "sissy"--were in
their own culture esteemed as ideal representatives of maleness.

That model of manliness has nothing in common with the currently
fashionable incantation--itself harking back to Jungian twaddle about
"anima" and "animus"--that men "need to get in touch with their feminine
side." No, it's about the need to reinvent for everyone, male and
female, more fluid, expansive self-definitions; it's about moving beyond
gender conformity, beyond gender itself, to molding individually
satisfying selfhoods.

Isay's concern is with current suffering, not with a futuristic path
that might circumvent it. "Gender-discordant" boys, taunted at school
and berated at home (especially by their fathers), internalize the view
that something is "wrong with them," that they're "not OK." And most of
them, from an early age, struggle to divest themselves of the
disapproved behavior--of all traces of effeminacy. The psychic cost, as
Isay points out, is high. In repudiating aspects of the self that could
be read as feminine, the male (straight or gay) does deep injury to his
affective life, including the loss of emotional expressiveness and
resilience, possible separation trauma from the forcibly disavowed yet
still adored mother, and the need to avoid relationships that might
evoke any resurgence of "feminine" traits.

Such speculations should, at a minimum, make us ponder precisely what is
"transformative" (as Nimmons and others claim) about the gym/circuit
culture. Is it expanding our range of expressive options--or narrowing
them? I think we should be wary, too, of the paeans to "erotic
adventuring" that fill The Soul Beneath the Skin (and much of gay
male discourse). I used to write such paeans myself, so feel free to
chalk up my current uncertainty to the onset of old age and the loss of
vital fluids.

We need to keep in mind that there's enormous variation in how gay men
conduct their sexual lives. Even before AIDS, only about 20 percent of
the gay male population pursued erotic exploration in any sustained
way--about the same percentage as those who chose celibacy. Still, among long-term gay male couples, roughly three-quarters define "fidelity" in terms of emotional commitment rather than sexual
faithfulness--a much higher percentage than is found among either
lesbian or heterosexual couples.

Nimmons considers this rescripting of monogamy in primary relationships
a "creative" phenomenon. Certainly there's plenty of evidence to support
the view that monogamy is comparatively rare among animal species. In
their recent book The Myth of Monogamy, the husband-and-wife team
of David Barash and Judith Eve Lipton offer a barrage of information to
the effect that monogamy is "not natural" and certainly "not easy." But
Barash and Lipton also argue that there is no better alternative, "that
open, unstructured, and nonrestrictive sexual relationships" do not make
people happier.

Nimmons is certain they do, and it's a view widely shared among his
crowd of urban gay men. They could be right, but the argument needs to
be mounted, not merely affirmed. When Nimmons claims that gay men have
built "the most complex, flourishing, nuanced sexual culture the planet
has known," it can only mean he's never heard of the Kama Sutra.

And although it may be true that gay people talk "a whole lot dirtier
with spouses and lovers" than straight people do, I wouldn't be too
quick to equate that with either "a stunning cultural accomplishment" or
a revolution--no, not even if we include such additional innovations as
"fuck buddies," "orgy rooms," "glory holes" and "lube guns." Personally,
I'd rather reserve the word "revolution" for that halcyon day when we
manage to eradicate racism, poverty and the subjugation of women.

To be sure, the pursuit of bodily pleasure is, given our puritanical
traditions, decidedly a force for good. But too self-congratulatory a
focus on glutes and orgasms often seems yoked to an undernourished
political sense that comes across, ultimately, as a form of
provincialism light-years removed from any concern with the survival
issues that dominate and defeat most of the planet's
inhabitants--including most of its gay people.

Celebrating what is special and innovative in urban gay male life is a
needed antidote to generations of negative stereotyping. But simply
affirming our cultural achievements won't cut it. We need to weigh them
against theories and evidence that don't simply reflect our community's
self-referential values. A concrete example of what I have in mind would
be to incorporate into our debates about, say, primary relationships the
writings of Stephen Mitchell, one of the founders of relational
psychoanalysis and among the very first to challenge the once-standard
view of homosexuality as pathology. Mitchell's new, posthumously
published book, Can Love Last? The Fate of Romance Over Time, is
not aimed at a gay audience, but the questions it raises assuredly
apply.

The book throws unsettling light on the dynamics of longstanding
relationships, unsettling because Mitchell turns some cherished formulas
on their heads--like the view, shared by many gays and straights alike,
that erotic excitement and domesticity cannot coexist for long. The
usual explanation for their incompatibility is some version of
"familiarity breeds boredom." But in Mitchell's view, turning off to our
primary partner is essentially a function of risk management. We
separate sex and love because otherwise the stakes would be too
high--too likely to heighten dependency and vulnerability, too
threatening to our (illusory) sense of being in control of our lives.

And, Mitchell points out, this is more true for men than women. The
macho masculinity we privilege in our culture, Mitchell argues, is
"easily destabilized by dependency longings." Most men cannot
risk monogamy. And we give them an easy way out: Our cultural
script tells men that for them (unlike women), sexuality is rapacious
and indiscriminate; that the male libido demands adventure.

Mitchell reports that when his patients "complain of dead and lifeless
marriages, it is often possible to show them how precious the deadness
is to them, how carefully maintained and insisted upon." Long-term
partners "collapse their expectations of each other," he writes, "in
collusively arranged, choreographed routine."

We then relocate our sexual desire away from our primary partner,
telling ourselves that he or she has become too familiar to ignite
desire--whereas in fact we're fleeing the threat of deeper knowledge of
the other and deeper exposure of ourselves. We refuse to acknowledge
that our partner, far from having become wholly known or from being
securely centered, is a mysterious multiplicity of selves. But armed
with our denial of the other's (and our own) potential, we rush off to
our one-night stands, threesomes and orgies. Nimmons relabels erotic
adventuring "diffuse intimacy" (the "diffuse" part, anyway, is
unassailable), and urges us to applaud it. Yet in light of Mitchell's
sensitive distinctions, the applause seems too sweeping, too
psychologically naïve.

I'm deeply committed to ending the era of gay apologetics. But we need
to be on guard against the temptation to replace it with an era of
extravagant self-congratulation.

It was an early November morning when I met Gairam Muminov on the steps
of a courthouse on the outskirts of Tashkent, the sprawling capital of
Uzbekistan. He was leaning against a white stone banister, nervously
smoking a cigarette. His thin, sunburned face was carved with deep
furrows and strained by even deeper worries, which seemed to manifest themselves most intensely
around his dark gray eyes. Inside the courthouse, local authorities were
keeping his son, Abdulvali, locked up for participating in a forbidden
religious group. Although Muminov's job as a builder prevented him from
attending the trial, the 57-year-old father had come that morning to
find out firsthand how long his son would be imprisoned. Abdulvali's
sentencing was scheduled to begin at 10 am.

When the time came, we entered the Akmal Ikramov District Court, a
rundown edifice of cheap marble and concrete located on a dusty road
beside the city's Police Station No. 2. Inside it was dim. On the first
floor, an unusually large, bone-dry fountain and a portrait of Uzbek
President Islam Karimov were visible beneath the few fluorescent lights.
The sentencing was to be held in a room on the second floor. Standing by
the door, in a gloomy hallway, were the families of nine other young
convicts who had been tried with Abdulvali. They waited in an atmosphere
of tense anticipation. Some mothers smoothed out their brightly
patterned dresses in silence; others explained why they thought this
case might be different: With the US-led war on terrorism under way and
renewed international attention brought to the Karimov regime's harsh
crackdown on independent religious expression, they hoped the usually
unforgiving Uzbek justice system might--just this once--tilt toward
leniency.

It was, in many ways, a far-fetched hope. The ten men had been arrested for participating in the pan-Islamic group known as Hizb ut-Tahrir, which Pakistani journalist Ahmed Rashid, in his new book, calls "the most
popular, widespread underground movement in Uzbekistan, Kyrgyzstan and
Tajikistan." The movement shuns violence but is no less radical because
of that. As Rashid explains, Central Asian acolytes of Hizb ut-Tahrir,
which was founded by dispossessed Palestinians in Saudi Arabia and
Jordan in 1953, foresee "a moment when millions of its supporters will
simply rise up and topple the Central Asian governments--particularly
the Karimov regime--by sheer force of numbers." In place of the region's
various secular states, the movement seeks to fashion a single
Taliban-style Islamic republic stretching from the Caspian Sea to
western China and beyond. It's a threat that the local autocracies, as
well as Washington, take seriously. According to its leadership, Hizb
ut-Tahrir has already attracted tens of thousands of members in the
region. And while two years ago the Clinton Administration narrowly
concluded that the movement did not sponsor terrorist activities, Rashid
argues: "The fear is that young [members]... may soon ignore their
elders' advice and turn to guerrilla warfare."

That fear may be somewhat hasty. But for the government in Tashkent, it
has been amplified by the activities of a much more militant insurgency
known as the Islamic Movement of Uzbekistan, or IMU, whose leaders made
just such a transformation from nonviolence roughly ten years ago. Since
1998, when the IMU officially came into being, it has clashed with the
government forces of three states, engaged in kidnappings and the drug
trade, and engendered an atmosphere of distrust and hostility among the
region's strongmen. The movement's leadership has established close
links with Osama bin Laden's Al Qaeda network and even moved the IMU
headquarters to northern Afghanistan when the more welcoming Taliban
regime was in power. Uzbek President Karimov blames the IMU, among other
opposition groups, for detonating a series of car bombs in Tashkent in
February 1999. The explosions killed thirteen people, injured more than
a hundred and touched off the latest and harshest in a series of
government campaigns against independent religious expression and
political dissent. Following the bombings, Karimov announced that even
the fathers of sons who participated in IMU activities would be
arrested. "If my child chose such a path," he said, "I myself would rip
off his head."

However, again and again, Rashid rightly argues in Jihad: The Rise of Militant Islam in Central Asia that the growing popular support for groups like the IMU and Hizb ut-Tahrir is largely a response to the
corrupt Karimov government's inability to bring even a modicum of
economic prosperity or democracy to Uzbekistan, the region's natural
axis of power. Central Asia has known harsh leadership and violent
upheaval before. Prior to the Soviets there were the czars, and prior to
the czars there were the local khans, who ruled brutally. However, when
the republics of Uzbekistan, Turkmenistan, Tajikistan, Kazakhstan and
Kyrgyzstan fell into independence following the collapse of Communism,
they not only experienced a crisis of national identity (none had ever
existed before as an independent state), they also joined a more
integrated world, where political and economic expectations for open and
fair governance are arguably higher than they have ever been. All this,
at a moment of religious reawakening across the region.

In this context, Central Asia's radical Islamic movements were very much
forged in a modern political pressure cooker. "In a series of crackdowns
in 1992, 1993, and after 1997, Karimov arrested hundreds of ordinary
pious Muslims for alleged links with Islamic fundamentalists, accusing
them of being Wahhabis"--converts to the strict brand of Islam embraced
by the Taliban--"closing down mosques and madrassahs, and forcing
mullahs into jail or exile," Rashid writes. "The result of these
repressive policies has been the growth of exactly what Karimov feared:
extremist Islamic militancy."

A visit to Uzbek courts is a good way to see this machinery in motion:
the steady spinning of the gears that wind moderate Muslims into
radicals. Here, too, the precarious fragility of Uzbekistan's current order is on display, and I can think of no better corollary to
Rashid's careful descriptions of a region approaching the edge of chaos
than the observations of Bill Berkeley, a journalist who has spent
numerous years reporting from Africa. "Many suppose that tyranny and
anarchy are at opposite ends of a linear spectrum," Berkeley has
written. "But often they are side by side on what might better be
described as a circle: the one is a product of the other, and vice
versa." For a number of Central Asian states, that circle has been
getting tighter and tighter over the past decade, and the ouster of the
Taliban regime from Afghanistan has done little to prevent it from
shrinking toward its explosive focal point.

The anarchy of tyranny is starkly evident in a place like the Akmal
Ikramov District Court. After Gairam Muminov and the other families had
waited for several hours, frustration and impatience set in. A few
splintered off to find a bailiff or clerk, but no one was able to find
out when, exactly, the sentencing was to occur. An Uzbek journalist
waiting with me explained: "The authorities do this on purpose. They
want to wear people down; they are counting on people like you and me to
get tired, hungry. Maybe we will have to leave for business or lunch,
and then suddenly the doors will open and court begins. This way they
can say they are being open but attract the minimum amount of
attention." However, at 3 pm, when Judge Nizom Rustamov, a stout and
smug man in a shiny sharkskin suit, finally ambled up the courthouse
steps, a slightly different picture emerged--that of the unaccountable
bureaucrat who probably decided against rushing to work simply because
he could. Matilda Bogner, Uzbekistan's Human Rights Watch
representative, described the judge this way: "Rustamov is known to have
sentenced someone to the death penalty for possessing fertilizer at home
because fertilizer can be used as an ingredient in the making of
explosives."

Such capricious power infests Uzbekistan's neighboring governments as
well. As the Soviet Union began to implode, none of the five Central
Asian republics rushed to embrace independence, democracy or economic
reform. Indeed, leaderships in a number of the republics actively
plotted to stymie the demise of the Communist system, however rotted,
because it had been nourishing them so well. As Rashid demonstrates,
this reluctance to break away was to a large degree ironic, given the
region's vast reserves of natural resources--primarily in oil, gas and
minerals--and its potential for prosperity (not to mention the potential
to funnel that prosperity into the hands of local elites). Moreover, as
he points out, "the Soviet policies of closed borders, forced cotton
agriculture, farm collectivization, population relocation and--most
significant--Stalin's redrawing of the map of Central Asia to create
five incongruous states had left the region economically hard-pressed,
[and] ethnically and politically divided."

Ten years on, much of Central Asia remains mired in its Soviet
inheritances: petty and sometimes not-so-petty corruption is a part of
everyday life; news is censored, often heavily; dissidents are
imprisoned, exiled or caused to disappear; resources are squandered;
environmental damage continues unabated. Yet, as the region remains
politically and in many ways economically stagnant, it is experiencing a
demographic surge. "The population gets younger," Rashid notes. "More
than 60 percent of the region's 50 million people are under the age of
25. This new generation is unemployed, poorly educated, and hungry--how
long will it continue to tolerate the decline in living standards and
the lack of rudimentary freedoms?"

There is no easy answer to this question. And Rashid is shrewd enough to
avoid offering one. Just as he is sensitive to the dangers that could
well belong to the region's future, he shows with great nuance that
important differences among the five republics have already led to a
diversity of outcomes. Turkmenistan, for instance, is now ruled by a
bizarre hermit-dictator who has had himself declared President for Life, a
position he plans to hold until 2010, when he intends to retire.
Meanwhile, Kyrgyzstan, the only country of the five not to become an
immediate heir to its Soviet-era leader, has shown a promising
willingness to reform, even if that willingness has waned over the past
several years. However, if these two countries sit at the region's
political poles, the most intriguing case among them may be Tajikistan,
which in Rashid's eyes serves as both a warning and a potential model
for its neighbors.

Not long after the Soviet collapse, mountainous Tajikistan fell into a
five-year civil war that appeared to mirror the conditions in
neighboring Afghanistan. From 1992 to 1997 the multiparty conflict,
which primarily cut across clan lines but also included Islamic rebels,
democrats and former Communist bosses as the main combatants, claimed
the lives of more than 50,000 people and forced roughly 750,000 people
from their homes. In Rashid's view, the primary engine of that conflict
was the Islamic Renaissance Party, or IRP--Central Asia's first popular
Muslim fundamentalist movement--which led a unified band of rebel groups
from headquarters based in Afghanistan and Russia. The fighting might
have ground on indefinitely (or remained frozen in stalemate), but in
1996 "the regional equation changed dramatically when the Taliban
captured Kabul," says Rashid. Fear that the Taliban regime would project
its influence into Afghanistan's post-Soviet neighbors pushed the rest
of Central Asia and Russia to force the Tajik government into making the
necessary concessions for peace. A year later, the parties signed an
agreement that legitimized the IRP and brought it into Tajikistan's new
coalition government.

The complexity of Tajikistan's civil war makes it difficult to summarize
neatly, and perhaps for this reason, coupled with its remoteness, it
received scant attention in the West. For Rashid, though, the outcome is
one that must not be ignored, not only because the peace agreement held
the country together over subsequent years but also because the radical
IRP has seen a dramatic loss in popular support since its inclusion in
government. "In many ways," Rashid argues, "Tajikistan is the key to
peace and stability in Central Asia--something the international
community must recognize, and soon." The logic being: Bringing
fundamentalist Islamic groups into the light rather than driving them
underground is the best way to show that their platforms are unworkable
and at odds with the region's traditionally moderate religious
sentiments.

This may be true, but Tajikistan's civil war is an unlikely example to
prove it, primarily because the conflict was largely one of regionally
based clans vying for political and economic power. Although radical
Islam colored the conflict, it was by no means the driving force. The
coalition government, if anything, was a joining of competing warlords
dressed in various ideologies and beliefs rather than a bridging of
deeply held convictions on secular and Islamic fundamentalist
state-building. This difference must be obvious to Rashid, who awkwardly
suggests the coalition government is an instance of the latter while
acknowledging the former, sometimes in dramatically confusing ways. At
one point, he writes that Soviet "collectivization...had fragmented the
clan structure.... Thus, many Tajiks saw the Islamic revival as a means
to cement a Tajik identity and ensure Tajikistan's development as a
unified state." Then, later, he writes that "most Tajiks identified with
their regions and clans rather than with their country." And later
again: "The civil war had quickly become a battle between clans rather
than an Islamic jihad." This last statement is by far the more realistic
and complete assessment--one echoed by Central Asia scholar Martha Brill
Olcott, who has argued that the "larger issues contested in Tajikistan's
civil war were clearly those of economic and political control."

In fact, the weakness of the government--its inability to protect
Tajikistan's borders and control its rugged territory--has made the
country an ideal base for the region's most extreme militants and best
organized drug traders (often one and the same). Today, roughly 70
percent of the world's heroin funnels through Tajikistan from
Afghanistan, and since the early 1990s Tajikistan's Tavildara Valley has
been an important training area for the IMU's charismatic military
leader Jumaboi Khojaev, a former Soviet paratrooper who later assumed
the name Juma Namangani after his hometown, Namangan, Uzbekistan. The
kind of detailed portrait Rashid has sketched of Namangani, who was
recently reported killed alongside Al Qaeda and Taliban units during the
latest war in Afghanistan, is unparalleled. This is where Rashid is at
his best, especially when he shows how the secretive Central Asian rebel
makes unusual company with Osama bin Laden, despite their close ties.
During one of Rashid's many exclusive interviews in the region, a former
Namangani compatriot explained how the notorious rebel was "shaped by
his own military and political experiences rather than Islamic ideology,
but he hates the Uzbek government--that is what motivates him above all.
In a way, he is a leader by default because no other leader is willing
to take such risks to oppose Karimov."

This in many ways appears to be a capsule characterization of militant
Islam in Central Asia, where religious extremism is primarily harnessed
to the cause of political and military aims, whether in internecine clan
warfare, in insurgencies acting against repression or in the meddling of
outside empires. As readers of the great historian Peter Hopkirk might
recognize, Namangani's pragmatism situates him in a long-running Central
Asian tradition in which strategic objectives rather than fundamentalist
religious ones ultimately lie behind the call to jihad. It was a move
even the Soviets tried. In 1920 Grigori Zinoviev, a close associate of
Lenin, called the Muslims of Central Asia to battle at a weeklong rally
in Baku, Azerbaijan. "Brothers," Zinoviev boomed to a wildly fervent
crowd brandishing swords and revolvers, "we summon you to a holy war, in
the first place against English imperialism!" This display fell in with
a briefly held plan Moscow had at the time: fomenting a chain of
uprisings and establishing an "Army of God" that would penetrate India
through Afghanistan and trigger enough Muslim unrest there to subvert
Britain's hold over South Asia. However, as Hopkirk notes in Setting the East Ablaze (and as the United States learned painfully after
aiding militants in Afghanistan in the 1980s), cultivating pan-Islam
"could be double-edged." Religious and nationalist sentiments could just
as easily flow against Moscow. The Basmachis, Central Asia's homegrown
mujahedeen, resisted Soviet power for more than a decade after the
Russian Revolution--and with a good deal of support from the British,
who slipped them caravans of arms and munitions from India.

Today, although the spirit of jihad has largely been unhinged from the
machinations of outside empires intent on controlling the region, its
proponents see themselves very much as bearers of the Basmachi
tradition, as Rashid demonstrates. But his book is also instructive in
pointing out differences between the region's Islamic groups of then and
now. Hizb ut-Tahrir's growing popularity suggests that outside
influences of a very different kind are leaking into Central Asia.
(Along with the IMU, Hizb ut-Tahrir's adherents subscribe to the strict
Wahhabist brand of Islam, which originated in Saudi Arabia, rather than
the more indigenous Sufism, which tends toward mysticism rather than
millenarianism.) This time it's happening at the grassroots--and feeding
off the criminality of local regimes.

There is probably no way to know whether Gairam Muminov's son,
Abdulvali, was truly a member of Hizb ut-Tahrir or was simply caught
praying in the wrong place, or listening to the wrong person, or
carrying the wrong leaflet. I'm sure even his lawyer doesn't know. When
one of the accused suggested that they had been tortured to confess (to
"anti-constitutional crimes"), Judge Rustamov would not hear of it. The
next day, I watched Muminov's hands shoot up to his face when Rustamov
sentenced his son to ten years of imprisonment. And as the father slowly
drew his shaky fingers away, his mouth fell open, his eyes turned blank.
I wondered: Earlier, this man shrugged off my criticisms of Uzbekistan's
ironfisted approach to dissent, saying he had all the freedom in the
world--limitless choices in the marketplace, among whichever apples and
oranges he desired. Was that still good enough for him?

That is a question the United States must begin asking if it intends to
become more active in fostering stability in the region. Rashid's
book--which follows his bestseller, Taliban--was rushed to
publication after September 11, so it is understandably short on
evaluating current US Central Asia policy. But it is the first good,
hard look at the region's Islamic movements and deserves the attention
of policymakers and interested everyday readers alike. The careful
consideration Rashid has given the grassroots causes that set these
insurgencies into motion will keep this book relevant for a long time to
come. As Rashid argues: "The Clinton administration policy of helping
Central Asia's repressive governments combat terrorism whilst mildly
lecturing them on their human-rights violations did not constitute a
strategic vision for the region." It still doesn't. Under the George W.
Bush Administration, military and economic aid to the region has
increased; so too, it seems, has the repression.

Reading Robert Caro to learn about Lyndon Johnson is like going to an
elaborate buffet in order to get the four basic food groups; they both
give you what you need along with much, much more. In fact, we're only
at the appetizers, since Caro's third and latest volume, Master of the Senate, comes in at over 1,000 pages and still doesn't take the
story up through the 1960 election! Nonetheless, both are experiences to
be savored. Caro is a gifted and passionate writer, and his
all-encompassing approach to understanding LBJ provides readers with a
panoramic history of twentieth-century American politics as well as a
compelling discourse on the nature and uses of political power.

Moreover, in the midst of the plagiarism contretemps over Stephen
Ambrose and Doris Kearns Goodwin, it is refreshing to read a popular
history that is original and well written. There is clearly no "Caro
Inc." with an army of researchers cutting and pasting books together as
fast as the printing presses can take them. Aided only by his wife, Ina, Caro has carried the project into its third decade. This slow pace results from a
methodical and exhaustive research process. One might well disagree with
Caro's analysis and interpretations, but no one can accuse him of
overlooking an important piece of evidence.

In reality, Master of the Senate is not one book but several.
Caro sets the stage with a history of the United States Senate. The
Senate is virtually unique among legislative bodies in any modern
democracy. With its six-year terms, equal representation for each state
regardless of population and its tradition of unlimited debate, the
Senate is an institution designed for inaction. Individual senators have
little or no incentive to yoke themselves together to advance the
national interest. By the time Johnson entered the Senate in 1949, the
body was increasingly seen as too inefficient to meet the demands of
modern government. Since the turn of the century, the President had
increasingly usurped its power in foreign policy, and many observers
predicted that the Senate would eventually have to go the way of most
legislative upper chambers and become, in effect, an American House of
Lords.

That the Senate did not wither away and the reasons for this fact form
the basis for another of Caro's books within a book, Lyndon Johnson's
ascent to "Master of the Senate." Possessed of ambition that can only be
described as obsessive, Johnson campaigned to increase his own power and
influence with a relentlessness and ruthlessness that would have made
Machiavelli blush.

Before Johnson could amass power in the Senate, however, he first had to
shore up his political base in Texas. Having only narrowly "won" (stolen
is the more appropriate word, as Caro vividly and convincingly
demonstrated in his previous volume) election to the Senate in 1948,
Johnson now had to prove his fealty to the Lone Star State's reactionary
and powerful oil and gas titans. To do so, Johnson organized a
behind-the-scenes campaign to block President Truman's reappointment of
Leland Olds as chairman of the Federal Power Commission. A staunch New
Dealer and a committed public servant, Olds had used his position at the
FPC to make sure that electric and natural gas companies did not gouge
their customers. As a result, he was anathema to the Texas natural gas
companies, who saw even the smallest and most reasonable limitation of
their already vast profits as socialist tyranny.

In earlier days, Johnson had fought the same fight as Olds, working as a
freshman Congressman to provide cheap electricity to rural farmers.
Doing so had secured Johnson a place in the hearts of his poor Texas
Hill Country constituents, but that counted for little against the
political power of the state's oil and gas industry. Ambition now
required Johnson to destroy Leland Olds. Unable to attack him on the
substance of his work at the FPC, Johnson instead distorted Olds's
writings as a journalist in the 1920s to portray him as a Communist.
Using a phrase that Joe McCarthy would have appreciated, Johnson
denounced Olds on the floor of the Senate, asking, "Shall we have a
commissioner or a commissar?" The choice of the Senate was clear; the
Olds reappointment failed by a vote of 53 to 15.

The Olds fight secured Johnson's political base and brought him into the
warm embrace of the Texas establishment. After his victory over Olds,
Johnson flew back to Texas on the private plane of Brown & Root, the
giant Texas construction company. "When the Brown & Root plane
delivered him to Texas, it delivered him first to Houston, where a Brown
& Root limousine met him and took him to the Brown & Root suite
in the Lamar Hotel. Waiting for him there, in Suite 8-F, were men who
really mattered in Texas: Herman and George Brown, of course, and oilman
Jim Abercrombie and insurance magnate Gus Wortham. And during the two
months he spent in Texas thereafter, the Senator spent time at Brown
& Root's hunting camp at Falfurrias, and in oilman Sid Richardson's
suite in the Fort Worth Club."

Caro shows how, having won over the men who really mattered in Texas,
Johnson set out to win over the men who really mattered in the Senate,
the "Old Bulls." As a result of the Solid South and the seniority rule,
nearly all of these men were the Southern barons who controlled the
powerful Senate committees. In many ways, currying favor with the Texas
establishment had been relatively easy; all it had required was
destroying the naïve and principled Leland Olds. But the Old Bulls,
men like Harry Byrd Sr. of Virginia, Walter George of Georgia and
Kenneth McKellar of Tennessee, were a much tougher crowd, not easily
deceived and viciously protective of their power and prerogatives.
Traditionally, one did not attain power by winning over such men;
rather, power came by becoming one of them. But this required the time
and patience necessary to accumulate enough seniority to land a choice
committee assignment and then more time and patience to ascend to the
chairmanship.

But, as Caro points out, Johnson had a very short supply of time and
patience. Indeed, he had risked everything to run for the Senate in 1948
in order to avoid the seniority trap of the House. Now he found himself
in the same bind. Even before he was sworn in, Johnson tried to persuade
the venerable Carl Hayden, chairman of the Senate Rules Committee, which
was in charge of office space, to give him an extra room. When Johnson
pressed his case too zealously, the usually courteous Hayden shut him
down, saying, "The trouble with you, Senator, is that you don't have the
seniority of a jackrabbit."

If Johnson didn't have the seniority to become one of the Old Bulls, he
would surely do everything he could to gain their favor. The usual
method was obsequiousness, telling these men how powerful and important
they were, and how much he had learned from them. According to Caro,
Johnson's behavior "proved the adage that no excess was possible."

One device, also favored by a more recent Texas politician, was to
bestow nicknames. Edwin "Big Ed" Johnson of Colorado was dubbed "Mr.
Wisdom," while Leverett Saltonstall of Massachusetts became "Old Oil on
Troubled Waters." Johnson resented having to use such tactics, telling
aide John Connally after fawning over a senior senator, "Christ, I've
been kissing asses all my life"; but ass-kissing worked. As Caro writes,
"In December, Hayden had refused to give Johnson that extra room in the
basement that he had asked for; in February Hayden found that an extra
room was, indeed, available."

While Hayden had the power to provide extra office space, real power in
the Senate rested with the acknowledged leader of the Old Bulls, Richard
Russell of Georgia. Just as Johnson in his earlier career had gained
power by making himself a protégé of House Speaker Sam
Rayburn and President Franklin Roosevelt, he now set out to cultivate
Russell. Though different in temperament and politics, all three men
shared a common element that Johnson used to ingratiate himself: as Caro
points out, all three were lonely. Both Rayburn and Russell were
childless bachelors, while Roosevelt was largely estranged from his
children and wife. This provided the perfect opportunity for Johnson to
be the dutiful son and companion.

Mere companionship and filial piety, however, were not enough to win
over Russell. According to Caro, "It wasn't a son that Richard Russell
wanted, it was a soldier--a soldier for the Cause." And that cause was
white supremacy. In describing Russell's views on this issue, Caro shows
that while they were almost always cast as a reasoned, nonracist defense
of states' rights, racism was at their core, and such moderation was
merely tactical. "His charm," writes Caro, "was more effective than
chains in keeping blacks shackled to their terrible past." Caro's
description of Russell is not just of historical interest. With calls
for states' rights gaining renewed popularity and legitimacy, it is
important to remember that while not every states' rights advocate is a
closet racist, nearly every advocate of racial inequality has used
states' rights to cloak his real aims and beliefs.

Johnson was willing to take up arms for Russell's cause. In his maiden
speech in the Senate, Johnson denounced President Truman's call for
civil rights legislation in the same reasoned tones used by Russell.
When Johnson finished, Russell was the first to shake his hand, telling
him that his speech was "one of the ablest I have ever heard on the
subject."

Having gained Russell's and the Old Bulls' trust, Johnson now began to
build his own power. In 1950, after the outbreak of the Korean War, he
convinced Russell to allow him to chair a special committee on
preparedness. Caro's description of Johnson's committee is a textbook
example of the Washington version of stone soup, in which, with the
right skills and connections, one can turn nothing into something. For
the most part, the committee did very little original research or
investigation, instead recycling work done by other committees and
agencies. The difference, however, was that Johnson had a gift for
working the media. In this pretelevision era, the term "soundbite" had
yet to be coined, but Johnson was a master of the form nonetheless. The
committee's first report was really an earlier, prewar report on the
nation's rubber supply. In the hands of Johnson and his staffer Horace
Busby, the report became a major story. "Phrases like 'darkest days,'
'business as usual,' 'too little and too late' leapt out of the final
report," writes Caro. Newspapers were particularly enamored of Johnson's
description of Defense Department desuetude as "siesta psychology."

Despite, or perhaps because of, the lack of substance, the preparedness
committee gave Johnson his first national attention. But the favor of
the Old Bulls and a handful of headlines were not nearly enough to
secure Johnson's ultimate prize, the presidency. Recognizing that the
traditional path to power in the Senate, and ultimately to the White
House, was still largely closed to a junior senator, Johnson decided to
create his own path. Here was where Johnson's cunning as a political
entrepreneur came into play. As Caro writes:

Lyndon Johnson's political genius was creative not merely in the lower,
technical aspects of politics but on much higher levels. And if there
was a single aspect of his creativity that had been, throughout his
career, most impressive, it was his capacity to look at an institution
that possessed only limited political power--an institution that no one
else thought of as having the potential for any more than limited political
power--and to see in that institution the potential for such substantial
political power; to transform that institution so that it possessed such
power, and in the process of transforming it, to reap from that
transformation substantial personal power for himself.

Johnson aide Bobby Baker put it more succinctly: "He knows what makes
the mules plow."

The institution that Johnson chose was the party leadership of the
Senate. Almost utterly lacking in formal power, party leadership was
more often the graveyard of political careers than the launching pad. No
Senate Democratic leader had possessed any influence to speak of since
Joseph Robinson in the 1930s. The Democratic leaders immediately
preceding Johnson, Scott Lucas of Illinois and Ernest McFarland of
Arizona, had been disasters, utterly incapable of bridging the
differences between the party's liberal Northern and conservative
Southern wings. In fact, the demands of the job had contributed to the
election defeats of both men, Lucas in 1950 and McFarland in 1952. Now,
following the Republican sweep of 1952, the position of minority leader
stood open. Since no one else wanted the position, Johnson, with Russell's
blessing, ascended to the post. Only four years into his first term,
Lyndon Johnson was now at least the nominal leader of the Senate
Democrats.

And Johnson soon converted nominal leadership into real power, persuading
his fellow Democrats that they needed to put their best people forward to
defend against the Republicans. But that would require handing out committee positions on
the basis of ability, not seniority. Using a combination of persuasion
and horse-trading, Johnson managed to make enough room to place every
Democrat on at least one major committee. In doing so, he transformed
the Senate, imbuing its committees, at least on the Democratic side,
with fresh blood. More important for Johnson, his own power had been
enhanced greatly. Dozens of members, liberals and conservatives,
Northerners and Southerners, now owed their committee assignments to
him, and that meant power.

Revamping the seniority system was but the first way Johnson became
master of the Senate. While much has been written about the famous
Johnson "treatment," LBJ's in-your-face style of persuasion, Caro
demonstrates that these skills, effective though they were, were not the
only ones at his disposal. Deploying a skilled staff, he soon knew more
about what was happening in the Senate than any other member, making him
the "go-to guy" for information. He managed to negotiate unanimous
consent agreements to limit debate, so that minor bills of importance to
individual senators could be passed with dispatch. Johnson was also a
skilled parliamentarian, using his knowledge of Senate rules and
procedures to outwit the majority Republicans. Finally, Johnson had an
astute grasp of national politics, demonstrated most effectively in the
battle over the Bricker Amendment. Advanced by Republican isolationists,
the constitutional amendment would have severely restricted presidential
power in foreign policy by requiring treaties to be approved by the
state legislatures as well as the Senate. Johnson not only managed to
defeat the amendment but to do so in a way that aligned the Democrats
with the popular Eisenhower against Congressional Republicans.

No method was beneath Johnson. He was just as willing to destroy the
careers of his Senate colleagues as he had been to destroy Leland Olds's.
Perhaps more than any other senator, Kentucky's Earle Clements had been
loyal to Johnson, "dog loyal," in Caro's words. But after a bill
supported by Johnson failed to pass on a tie vote, Johnson forced
Clements to switch his vote, although he knew it would destroy
Clements's re-election hopes. In the case of Virgil Chapman, also of
Kentucky, Johnson helped to destroy not only his career but his life.
Even though Johnson knew Chapman was falling further and further into
the depths of alcoholism, his response was not compassion but
manipulation. He would bring Chapman to his office after the Senate
recessed and ply him with drinks until the inebriated Kentuckian would
agree to anything Johnson wanted. Chapman eventually died in a drunk
driving accident.

Johnson's success as minority leader helped the Democrats regain control
of the Senate after the 1954 elections. Now the majority leader, Johnson
further extended his power. As a consequence, the Senate began to act
with new efficiency and effectiveness. And even though Johnson never
strayed too far from Russell and the other conservative senators upon
whom he relied, he still managed to help Democratic liberals to achieve
at least some of their legislative goals. By the mid-1950s, the changes
wrought by Johnson had dispelled much of the criticism leveled against
the Senate.

Caro, however, suggests that Johnson might have destroyed the Senate in
order to save it, since these changes came at the cost of diminished
deliberation, the process by which individual senators could educate and inform the
public on the great issues of the day. He quotes Paul Douglas, liberal
Democratic senator from Illinois during the 1950s and oftentimes a foe
of Johnson, who charged, "Under Johnson, the Senate functions like a
Greek tragedy. All the action takes place offstage, before the play
begins. Nothing is left to open and spontaneous debate, nothing is left
to the participants but the enactment of their prescribed roles." Caro
goes further, suggesting that by limiting debate, Johnson was making the
Senate an expression of his own mania for control and aversion to debate
and dissent.

Regardless of Johnson's real motivations for limiting debate, this is an
overly romantic view of Senate proceedings, in which debate consists
more of partisan bickering and mundane bloviating than reasoned and
informed discourse. Furthermore, unlimited debate is tailor-made for
defenders of the status quo, allowing them great power to block any
measure to which they object. Caro even seems to acknowledge this in a
footnote, where he quotes Johnson aide Harry McPherson, "Complaints
about limiting debates...often turned out to be based on a plaintiff's
annoyance that he must either miss a vote or forgo a speaking engagement
back home. And besides, who knew better than liberals the enervating
consequences of unlimited debate."

Caro may be right that Johnson saved the Senate, but he doesn't consider
whether it was worth saving in the first place. Yes, Johnson did reform
the chamber so that it could legislate more effectively, but the
institution remained and remains a throwback to a predemocratic era. Not
only does the Senate's equal representation of states grossly distort
the one-person, one-vote principle, but the ability to filibuster means
that forty-one senators, even if they represent the twenty-one smallest
states (with only 11 percent of the total population), can veto any
piece of legislation. And since Republicans predominate in small states,
the institution serves only to magnify their power. For example, even
though Democrats have a 50-49 edge in the current Senate (the
remaining member is Independent Jim Jeffords of Vermont), sixty senators
represent states won by George W. Bush in the 2000 election. By saving
the Senate, one might argue, Johnson only succeeded in maintaining an
institution that has traditionally served to reinforce conservatives and
the status quo.

In 1956, Johnson thought the time was right to make his move for the
Democratic nomination. But this effort was doomed before it even began.
First, he refused to be an active candidate; thus much of the support
from the South and West that might have been his had he wanted it went to
other candidates. Even if Johnson had run a more active and skillful
campaign, it was clear that he never had enough liberal support to win
the nomination. For all that he had accomplished in the Senate, Johnson
was still viewed as suspect by Democratic liberals. In some ways, as
Caro suggests, the liberals' criticism was unfair. Johnson was no Hubert
Humphrey, to be sure, but he was also no Richard Russell or James
Eastland. During his twelve years in the Senate, Johnson's Americans for
Democratic Action liberal-voting score was fifty-six, just about average
for the party and essentially splitting the difference between the
Southern Democratic average of thirty-seven and the Northern Democratic
average of seventy-five. Moreover, during his tenure as majority leader
from 1955 to 1960, Johnson's average score was sixty-five.

But Johnson recognized that his overall ADA score was not the real
issue. By the mid-1950s, Democratic liberals increasingly used civil
rights as a litmus test for support. According to Caro, Johnson would
tell friends privately, "I want to run the Senate. I want to pass the
bills that need to be passed. I want my party to do right. But all I
ever hear from the liberals is Nigra, Nigra, Nigra." (During the 1964
campaign, Johnson would use the same refrain in a very different
context, telling a New Orleans audience of a dying Southern senator who
wanted to give one more speech, a good Democratic speech, because the
only speeches the people of his state ever heard were "Nigra, Nigra,
Nigra.") Caro goes on to add that the conclusion for Johnson was clear:

He knew now that the only way to realize his great ambition was to
fight--really fight, fight aggressively and effectively--for civil
rights; in fact, it was probably necessary for him not only to fight but
to fight and win: given their conviction that he controlled the Senate,
the only way the liberals would be satisfied of his good intentions
would be if that body passed a civil rights bill. But therein lay a
seemingly insoluble dilemma: that way--the only way--did not seem a
possible way. Because while he couldn't win his party's presidential
nomination with only southern support, he couldn't win it with only
northern support either. Scrubbing off the southern taint thoroughly
enough within the next four years to become so overwhelmingly a liberal
favorite that he could win the nomination with northern votes alone was
obviously out of the question, so dispensing with southern support was
not feasible: he had to keep the states of the Old Confederacy on his
side. And yet a public official who fought for civil rights invariably
lost those states.

This dilemma sets up another book within a book and the dramatic climax
of Master of the Senate, the battle over the 1957 Civil Rights
Act. This is where Caro's gifts as a storyteller really come alive, and
his account provides what is surely one of the best analyses of the
legislative process ever written. Moreover, Caro is right to label
Johnson's role in the passage of this legislation as an exercise of
"genius." But Caro goes too far in suggesting that the 1957 Civil Rights
Act marked a turning point at which Johnson's "compassion, and the
ability to make compassion meaningful, would shine forth at last."

Caro does recognize that the practical impact of the 1957 legislation
was inconsequential and far less significant than the later Civil Rights
Act of 1964 or the Voting Rights Act of 1965. And while the bill's
proponents described it as half a loaf, Caro agrees with Humphrey, who
described it as a "crumb." Nonetheless, Caro claims that as the first
civil rights measure to pass the Senate and to be enacted into law since
1875, the legislation was of immense symbolic importance and the
harbinger of things to come. "The Civil Rights Act of 1957," according
to Caro, "was hope." Caro has a point, but a debatable one. The law did
raise hopes, but by accomplishing so little, many of those hopes ended
up dashed. Furthermore, while the 1957 act was a first step toward more
effective legislation, it would take another eight years to complete the
journey, eight more years of Jim Crow and disfranchisement, of
oppression and violence. Hope was better than nothing, but help is what
was really needed.

And help would have been provided then, if not for Lyndon Johnson. Help
was contained in the civil rights bill proposed by the Eisenhower
Administration and passed by the House, with strong provisions against
discrimination in public accommodations and voting, along with effective
enforcement mechanisms. But Johnson knew that such a bill was utterly
unacceptable to his Southern colleagues. Thus, while Johnson recognized
that he had to fight for a civil rights bill, it couldn't be
this civil rights bill.

Consequently, Johnson's first maneuver was to help defeat an effort by
Republicans and liberal Democrats to rewrite Senate Rule 22 in order to
short-circuit the expected Southern filibuster. At the opening of the
1957 session, pro-civil rights senators sought a ruling from Vice
President Richard Nixon, acting in his capacity as the Senate's
presiding officer, that the Senate was not a continuing body and
therefore was not bound by previous rules. That would mean that a
majority of senators could establish a new rule allowing debate to be
shut off with only a simple majority, not the usual and nearly
unobtainable sixty-four votes. Indeed, Nixon, hoping to swing black
votes to the GOP, would have issued such a decision. But before he could
do so, Johnson used his prerogative as majority leader to move to table
the proposed rules change. Using all the skill and power he had amassed
as majority leader, Johnson managed to get a majority for his motion.
But it was a 55-38 tally. If only seven votes had gone the other
way (the three absentees having announced against Johnson's motion), the
count would have stood at 48 to 48, and on a tie the motion would have
lost; Nixon would have issued his decision, the
filibuster would have been broken and an effective civil rights bill
would have been passed in 1957, not 1964. As a result of the defeat on
Rule 22, the bill that ultimately did pass was only a very weak voting
rights measure.

If ever one needs evidence of the contingency of history, imagine, if
you will, those seven votes going the other way. Jim Crow would have
died in the late 1950s, avoiding much of the tumult of the 1960s. The
Republicans, led by Richard Nixon, would have been the party of civil
rights, not the Democrats and Lyndon Johnson. From there, one can spin
off any number of plausible scenarios that result in a very different
history of the past forty years.

But none of these scenarios were acceptable to the Lyndon Johnson of
1957, since they would have conflicted with his ambition; and at that
point, despite Caro's claim, his ambition was still more important than
his compassion. Switching sides on Rule 22 would have destroyed his
Southern support and with it any chance he had of becoming President.
Johnson's compassion would eventually shine through, and as a result,
civil rights would eventually come to black America. But they would not
come until Lyndon Johnson's ambition allowed them to.

The twentieth century was arguably the bloodiest in modern history,
earning from one commentator the moniker of the Age of Barbarism. From
the Nazi genocide, to the killing fields of Cambodia and Rwanda, to the
"ethnically cleansed" areas of the former Yugoslavia, the twentieth
century was one of unprecedented horror for many.

Mass slaughter of civilians is, of course, much older than these
horrors. The modern world brought about by European expansionism, the
famed Pakistani intellectual Eqbal Ahmad once observed, is a time of
extraordinary unrecorded holocausts. How many of us, for instance, are
familiar with the deaths of upward of 10 million in the
Belgian-controlled Congo in the latter nineteenth and early twentieth
centuries? Or how about Australia's extermination of the indigenous
population of Tasmania? The decimation of "inferior races" in settler
colonies, brought about by Western imperialism and the associated
legitimizing ideologies, in fact, contends Sven Lindqvist in his
brilliant Exterminate All the Brutes, laid the
groundwork for Hitler's crimes by creating particular habits of thought
and political precedents.

What was unique to the twentieth century--and thus the subtitle of
Samantha Power's very impressive "A Problem From Hell": America and
the Age of Genocide--was the invention of the very word "genocide"
and its establishment as a legal construct outlawing one of the most
egregious forms of state terror. That represents a great advance in
the development of international law and associated political and
juridical mechanisms, but the fact that genocide continues to occur and
to go unpunished speaks to the difficulties of giving life to a legal
regime.

While the parties most responsible for this shortcoming are those that
perpetrate genocide, Power focuses much of her opprobrium on the party
that is in her estimation best positioned to put an end to or at least
significantly curb such horror: the US government. "No US President has
ever made genocide prevention a priority," she writes, "and no US
President has ever suffered politically for his indifference to its
occurrence. It is thus no coincidence that genocide rages on."

The myriad horror stories of this age of genocide have many ugly
characters, several of whom Power profiles in her well-written and
extensively documented book. But there are also many heroes, namely
those within and without the US government who have spoken the
proverbial truth to power with the goal of making Washington
acknowledge that genocide was taking place--and thus take appropriate
action--in the various cases that Power carefully details.

Perhaps the biggest hero in Power's book is Raphael Lemkin. A Polish Jew
who as a young boy had a fascination with the history of mass
slaughters, Lemkin became a lawyer and international legal scholar. He
set out to ban the destruction of ethnic, national or religious groups,
to end the impunity that national sovereignty grants to state actors
who perpetrate such atrocities and to insure universal jurisdiction for
their prosecution.

Forced to flee his homeland when the Nazi army invaded in 1939, Lemkin
ended up in the United States soon thereafter. He worked indefatigably
to bring attention to and to record Hitler's extermination of Jews,
while urging Americans to do everything they could to put a stop to it.
At the same time, he endeavored to invent a word to characterize such
slaughters, one that, in Power's words, "would connote a practice so
horrid and so irreparable that the very utterance of the word would
galvanize all who heard it." When he coined the term "genocide" in 1944,
Western governments and political pundits quickly embraced it. This led
Lemkin to assume that actions to codify the term and fight the practices
it encompassed would quickly follow. He soon learned that he had a
long fight on his hands--one that he waged incessantly until he died,
penniless, in 1959.

Before his demise, however, Lemkin saw the United Nations General
Assembly pass the genocide convention on December 9, 1948, the body's
first passage of a human rights treaty. And less than two years later,
the necessary twenty countries had ratified the convention, making it
international law. But he did not live to see the United States ratify
it, a necessary step, Lemkin thought, to insure its enforcement, given
American power. Indeed, it would not be until 1988 that the Senate did
so, but not before attaching a set of reservations, understandings and
declarations that insured that the United States itself could never be
charged with the crime, thus rendering American approval largely
symbolic.

The architects of the convention understood the danger of making
Hitler's crimes the standard by which to determine future genocides.
To trigger appropriate action, states must be able to identify as
genocide acts aimed at destroying "in whole or in part, a national,
ethnic, racial or religious group"--the legal definition of the
crime--well before those acts have the chance to reach such a scale. (The convention
enjoins its signatories to take measures to prevent and punish the
crime.) Despite such intentions, the link between genocide and Hitler's
so-called Final Solution "would cause endless confusion for
policy-makers and ordinary people who assumed that genocide occurred
only where the perpetrator of atrocity could be shown, like Hitler, to
possess an intent to exterminate every last member of an ethnic,
national or religious group."

While the Hitler-standard problem did help to undermine effective
responses by American officials and opinion-makers to various
post-World War II genocides, there were other dilemmas as well,
including the difficulty of believing reports of horrific slaughter.
Even in the face of extensive and graphic media coverage, Power writes,
"American policymakers, journalists and citizens are extremely slow to
muster the imagination needed to reckon with evil." In addition, there
is a tendency to assume, before the fact, that the would-be perpetrators
of genocide are rational actors who will not engage in horrific terror;
that traditional diplomacy can resolve the crisis; and that civilians
who keep a low profile during the conflict will survive. At the same
time, cold geopolitical calculations underlie official reactions, and
officials often spin the violence as two-sided, a result of age-old hatreds
and thus inevitable, while arguing that any type of serious intervention
would be futile and even counterproductive. Thus, not only does
Washington abstain from sending troops but it also takes very few steps
along a continuum of potential interventions to deter genocide.

This nonresponse, Power demonstrates, is not something unique to the
presidencies of George Bush Sr. and Bill Clinton, who emerge looking
especially bad. It manifested itself to varying degrees in all the cases
she examines, beginning with the Ottoman Turks' slaughter of almost a
million Armenians in 1915. The United States under Woodrow
Wilson--despite being well informed of the slaughter--did not support
the Allies' condemnation of Turkey's crimes against humanity, lest such
support undermine American neutrality. Disregarding the pleas of
Washington's ambassador, Henry Morgenthau, the Wilson Administration
refused even to issue a direct government-to-government appeal to cease
the killings or to pressure the Turkish authorities to allow
humanitarian aid deliveries to Armenians driven from their homes and on
the brink of starvation. For Power, Wilson's nonresponse "established
patterns that would be repeated."

But as Power illustrates, it was not simply that the United States did
nothing. Often Washington indirectly and directly aided the
genocidaires. In Cambodia, for example, the US bombing that
preceded Pol Pot's seizure of power "killed tens of thousands of
civilians." While horrific in its own right, "it also indirectly helped
give rise to a monstrous regime" responsible for the deaths of an
estimated 2 million Cambodians. And in the case of Iraq's
slaughter of the Kurds, the Reagan White House dismissed reports of
Saddam Hussein's gassings and other atrocities while maintaining aid to
his regime, preferring to maintain its unholy alliance with Iraq in its
war with Iran. The year after Saddam's forces destroyed several thousand
Iraqi Kurdish villages and killed close to 100,000 Kurdish civilians
(1987-88), Washington, now under Bush Sr., actually doubled the
amount of agricultural credit it had been providing to Saddam's regime,
increasing it to more than $1 billion.

In other cases, the United States helped to undermine effective
international responses to genocide. Perhaps the most shameful case was
that of the Clinton Administration during the 1994 slaughter in
Rwanda: the killing of approximately 800,000 Tutsis and
moderate Hutus in the span of 100 days, making it the fastest, most
efficient killing spree of the twentieth century. Clinton, whom Power
inexplicably refers to as "a committed multilateralist," one with "faith
in the United Nations," did everything he could to avoid doing something
constructive. Throughout, and similar to their conduct through much of
the Serb-perpetrated atrocities in Bosnia, Administration officials
feigned ignorance of what was going on. US intelligence reports had
warned Washington of the likelihood of mass killings in Rwanda.
Nevertheless, Clinton refused Belgium's request to reinforce the small
UN peacekeeping mission to the country. And once the killing started,
the Administration denied almost until the end that genocide was taking
place, despite full knowledge to the contrary. To do otherwise would
have required that Washington take appropriate action. Instead, the
Administration insisted that UN peacekeepers withdraw from Rwanda and
then refused to authorize the deployment of a stronger UN force. It was
not until the Rwandan Patriotic Front had driven most of the
perpetrators out of the country and seized power in the capital that
Clinton ordered the closing of the Rwandan Embassy in Washington and the
seizure of its assets.

In her investigation, Power justifies her choice of case studies by two
key criteria: that each meets the terms of the 1948 genocide convention,
and that each presented the United States with options for meaningful
diplomatic, economic, legal or military intervention. But as we shall
see, it is questionable whether all her cases satisfy these criteria.

In terms of the first, the suggestion that what took place in Kosovo was a
genocide, or would have been had NATO not intervened, is highly
contentious within the international legal and human rights community.
As for the Khmer Rouge, while they were guilty of killing large
percentages of the country's Muslim Chams, Vietnamese and Buddhist
monks, the bulk of their human targets were alleged political enemies.
In this regard, these killings would not form part of a genocide, at
least through the narrow criteria of the 1948 convention.

As Power explains, the architects of the genocide convention made the
explicit decision to exclude political groups--a move actively supported
by Lemkin. They did so in order to insure the support of many countries,
largely those of the Soviet bloc and some from Latin America as well,
that feared the inclusion of political groups would inhibit the ability
of states to suppress armed rebellions within their boundaries. It
appears that Lemkin was sympathetic to neither the underlying
assumptions nor the implications of such an argument but supported it
for pragmatic reasons--a position that Power seems to share. This might
explain why she has no problem including the horrors inflicted by the
Khmer Rouge under the general rubric of genocide. But this more
flexible notion of what constitutes genocide raises the question of
why Power chose the cases she did in laying out her argument and ignored
other possible instances.

This question also relates to the second criterion for her choices,
namely that the United States had a variety of options available for
meaningful intervention. Here, Power is treading on even weaker ground
in some instances.

On Rwanda and Bosnia, Power makes her most convincing case that there
were concrete steps the United States could have taken that would have
had significant effects in lessening the bloodletting. In other
instances she examines, however, such as those of the Nazi and Khmer
Rouge holocausts, she is less convincing. Regarding Cambodia, for
example, she contends that the Khmer Rouge were less immune to outside
criticism than was claimed by American authorities. In this regard, she
argues that "bilateral denunciations by the United States may well have
had little effect on the Khmer Rouge's internal practices.
Unfortunately, because so few US officials spoke out publicly against
the genocide, we cannot know." In terms of the Nazis, Power appeals to
conventional wisdom and suggests that Washington could have done things
to prevent Hitler's crimes, but makes no serious effort to persuade the
reader or to engage the literature that has called such arguments into
question. As Peter Novick argues in his much-acclaimed The Holocaust
in American Life, the various ex post facto proposals for rescuing
Jews from Nazi clutches ignore what were very real constraints at the
time and often would have been of little practical use. Substantial
rescue efforts, Novick contends, would have had a marginal effect at
best. (Nevertheless, he asserts, it would have been worthwhile to carry
out the proposed actions; but they would have saved 1 or perhaps 2
percent, at most, of those who died.)

Power applauds US action loudly in the case of Kosovo. Indeed, she
argues that hundreds of thousands of lives would have been lost had the
United States and its NATO allies not engaged in the bombing campaign
against the Serbs. She offers no substantiation for this claim. And, of
course, how could she? Perhaps the greatest weakness of the Kosovo
chapter, however, is that she does not engage any of the critiques put
forth by the likes of Noam Chomsky and other commentators--many writing
in this magazine--that there were alternatives to the NATO action, ones
that would have been consistent with international law and might have
actually lessened the killings and expulsions that increased
dramatically after the start of the bombing, to say nothing of its
effects on Serb civilians. At the very least, Power should have
presented and grappled with such arguments. Hardly anyone denies that
Milosevic & Co. were capable of, and guilty of, enormous brutality.
Indeed, Power graphically shows how Serb forces put this capacity to
horrific and massive use in Bosnia and the fatal consequences of the
failure of the West to acknowledge the bloodshed and respond
appropriately. In this regard, mass killings in Kosovo were arguably a
distinct possibility. But the question remains, Were there courses of
action other than that taken up by Washington and its NATO allies?

Power understandably feels outrage at international and, more
specifically, American inaction in the face of mass killing. With an
American audience in mind, she challenges the reader to do
something--whatever is in her power--to suppress and/or bring to justice
those responsible for the slaughter of innocents. She makes a compelling
case for a collective moral, as well as an international legal,
obligation for the US government to do so. But this also raises what is
perhaps the biggest problem with "A Problem From Hell": Even
though she acknowledges that the United States sometimes directly and
indirectly aids genocidal regimes, the overall effect of her examples
and the manner in which she frames the book is to situate Washington as
an outsider to such horrors. In the book's final pages, for example, she
asks, "Why does the United States stand so idly by?" In this sense,
Power's choice of cases is quite safe. Had she looked beyond the
parameters of the conventional and examined instances in which the
American role in mass slaughter has been less that of a bystander and
more that of a partner in crime or outright perpetrator, her call for greater levels
of US intervention would seem at best unpersuasive and at worst
hypocritical and potentially dangerous. Three cases--those of Indonesia,
East Timor and Guatemala--illustrate this point.

Led by General Suharto, the Indonesian military and the civilian militia
that it armed and directed engaged in one of the worst bloodlettings of
the postwar era. Over the course of several months in 1965-66, they
slaughtered members of the Indonesian Communist Party (PKI) along with
members of loosely affiliated organizations (women's groups, labor
unions, etc.). While Indonesia's holocaust does not meet the strict
guidelines of the genocide convention, the scale and nature of the
killing spree were undoubtedly genocide-like, similar to the bulk of the
Khmer Rouge's crimes in Cambodia. Amnesty International estimated "many
more than 1 million killed." The head of the Indonesian state security
system put the toll at half a million, with another 750,000
jailed or sent to concentration camps. The American political
establishment welcomed the slaughter and the emergence of Suharto's New
Order, with Time hailing it as "the West's best news for years in
Asia."

The United States had effectively helped to lay the groundwork for the
military's seizure of power through its interference in Indonesian
affairs and support for the military over the years. Washington had also
long urged the military to move against the PKI. Accordingly, it
supplied weaponry and telecommunications equipment, as well as food and
other forms of aid, to the Indonesian Army in the early weeks of the
slaughter. The American embassy also provided the military with the
names of thousands of PKI cadres who were subsequently killed.

About ten years later, the Indonesian Frankenstein that Washington had
helped to create decided to invade Indonesia's tiny neighbor of East
Timor. Rather than just looking away, as Power incorrectly reports in
her one reference to East Timor, Washington aided and abetted an
international crime of aggression. While this has long been alleged, the
recent release of formerly classified documents by the Washington-based
National Security Archive now proves that then-President Gerald Ford and
Henry Kissinger, his foreign policy czar, gave Suharto the green light
for the December 7, 1975, invasion while meeting with him the previous
day. Over the following quarter-century, various US administrations
provided billions of dollars in weaponry, military training and economic
assistance to Jakarta during its more than two decades of occupation.
And in the early years of the slaughter, a time described by an
Australian government body as "indiscriminate killing on a scale
unprecedented in post-World War II history," Washington took
concerted steps to insure that the UN did not take effective action to
end Indonesia's annexation. The result was the death of well over
200,000 East Timorese, about one-third of the preinvasion population.

And, finally, Guatemala. There, more than 200,000, most of them
indigenous Mayans, lost their lives in the context of a brutal conflict
between a US-backed military oligarchy and a guerrilla force during the
1970s and '80s. The 1999 report of the internationally supported
Guatemalan Commission for Historical Clarification concluded that the
state was responsible for over 90 percent of the deaths and had
committed "acts of genocide." The commission also found that American
training of members of Guatemala's intelligence apparatus and officer
corps in counterinsurgency "had significant bearing on human rights
violations."

Because Samantha Power excludes cases like these from her analysis, she
seems to have little problem endorsing American global dominance and, on
the basis of such, calling for the United States to take the lead in
battling genocide. At the very end of an excellent chapter on the grisly
slaughter by Bosnian Serbs at Srebrenica, for example, Power lets
Senator Bob Dole explain why the United States finally became involved
in helping to end the terror in Bosnia. "Because we happen to be the
leader of the world," Dole stated.

Clearly there is a problem with Washington taking the lead in fighting
something it has helped to perpetrate on numerous occasions, and for
which it has never atoned, apart from a halfhearted admission of
wrongdoing (but not an apology) by Clinton in the case of Guatemala.

Simply because the United States has been complicit in gross atrocities
in the past does not mean, of course, that it is therefore incapable of
doing good, even if for the wrong reasons. But it does mean that we
should remain extremely skeptical of American leadership on the global
stage. As the current Palestinian-Israeli conflict painfully
demonstrates, what Washington calls American leadership is, as often as
not, unilateralist, bullying, obstructionist. All of these traits manifest
themselves in Washington's acceptance of Israel's flouting of
international law regarding its ongoing occupation and dispossession of
the Palestinian people. The United States has long been a principal
obstacle to an internationally acceptable solution, and it has done what
it can to prevent a multilateral approach to resolving the conflict.
Such antipathy toward international law and political institutions means
that "genocide prevention" could turn out to be just another instrument
in Washington's empire-maintenance tool kit.

If one of the main objectives of Power's book is to get the United
States to take a more active role in ending mass slaughter, surely it
would seem to be more efficacious--as well as principled--to begin by
scrutinizing cases in which the United States has been directly
involved. In this regard, her appeal to the American political
establishment on the basis of morality and enlightened self-interest
(genocide, she argues, causes regional and international instability,
something bad for the United States) is ill conceived. Ending
Washington's role in the slaughter of innocents requires struggling
against American militarism and unilateralism, as well as against
Washington's refusal to submit to international security and legal
mechanisms that would have even a remote possibility of holding US
officials accountable. The US refusal to sign on to the recently
established International Criminal Court and to cooperate with efforts
by a number of countries to question Henry Kissinger regarding various
international crimes is merely the latest manifestation of such
obstructionism.

This is not to suggest that if we could get the American house in order,
the world would be fine. As Power's book shows, there are plenty of
"evildoers" to go around. Something must be done to stop them, yes, but
it should be a truly international project. The best place to start is
at home, but not by first and foremost asking Washington to intercede
abroad. Demanding a US foreign policy consistent with international law
and human rights standards, as well as international accountability for
American officials who may have engaged in war crimes and crimes against
humanity, is the first step. Doing so will also increase the likelihood
of international cooperation in cases championed by Washington.

Finally, it is not obvious why mass killing that falls under the rubric
of genocide should be paramount in terms of international prevention and
adjudication. Power does not claim this explicitly, but it is a fair
conclusion to draw given that she does not discuss other terrible crimes
against humanity that result in massive loss of life. Why, for example,
should Serbian crimes in Bosnia be more worthy of scrutiny and demands
for accountability than, say, the US war against Vietnam, which caused
the deaths of 2-3 million civilians? In this regard, we must be
careful that the need to suppress and seek justice for genocide does not
prevent us from seeing all mass killings of civilians, no matter who
commits them, as unacceptable, and from acting accordingly.

"The original inspiration for The New Intifada," explains Roane
Carey in his foreword to this volume, "arose out of disgust at the
mainstream media's consistent misrepresentation of the basic facts of
this uprising." To "correct the balance," Carey, The Nation's
copy chief, assembled an impressive array of essays for this collection,
which aims to illuminate the myriad failings of the Oslo Agreements,
describe the struggles of the current peace movement, deconstruct the
media coverage of the Middle East and reveal the experiences of
Palestinians living under Israeli occupation before and during this new
intifada.

Palestinians, Israelis, Americans and others ("voices rarely tolerated
in the US media") have contributed to this volume; some are well-known,
like Edward Said, Noam Chomsky and Robert Fisk, while others are less
so, though no less important. Harvard research associate Sara Roy writes
about the Palestinian economy, which, relative to the economies of other
states in the region, is weaker now than it was in 1967. Egyptian
novelist Ahdaf Soueif shares a diary of her first visit to Israel, a
place she never intended to go: "My life," she writes, "like the life of
every Egyptian of my generation, has been overcast by the shadow of
Israel." Photographs separate the sections of The New Intifada,
and give a sense of the devastated landscape and people this book brings
to light.

In an essay from 2000 reprinted here, Said asks, "Why is it that more
Israelis do not realize--as some already have--that a policy of
brutality against Arabs in a part of the world containing 300 million
Arabs and 1.2 billion Muslims will not make the Jewish state more
secure?" Despite the efforts of Carey, his contributors and others, a
year and a half later, the question still stands.

The Past Ahead of Us

"History," wrote James Baldwin, "does not refer merely, or even
principally, to the past. On the contrary, the great force of history
comes from the fact that we carry it within us, are unconsciously
controlled by it in many ways, and history is literally present
in all that we do." Citing this as a starting point, historian and
Nation editorial board member Eric Foner goes on to note, "There
is nothing unusual or sinister in the fact that each generation rewrites
history to suit its own needs, or about disagreements within the
profession and among the public at large about how history should best
be taught and studied." He assembles a set of essays drawn primarily
from events in his life over the past decade--it's a personal book in
this regard--including accounts of his experience in two societies
grappling with deep historical change, Russia and South Africa. All
investigate the relationship between the historian and his or her world.
Since much of Foner's own work has centered on Reconstruction, many
of the essays broach that subject and its effects on race relations to
this day (he takes on Civil War documentarian Ken Burns and the cult of
nostalgia in this context).

Overall, much of Who Owns History? stands as an argument for
public engagement, and touches on issues such as globalization, social
reconciliation and national identity. "'American' is what philosophers
call an 'essentially contested concept,'" Foner observes, and he
cautions in his chapter on "American Freedom in a Global Age" that, in
the shadow of the Reagan revolution, "the dominant constellation of
definitions seems to consist of a series of negations--of government, of
social responsibility, of a common public culture," amid the tightening
web of economic and cultural ties termed "globalization." Foner says
that "the relationship between globalization and freedom may be the most
pressing political and social problem of the twenty-first century."

Devotees of "balanced," "objective," "fair" and "evenhanded"
nonfiction--well, they be hurtin' in these early days of the
twenty-first century. Enough, perhaps, to demand that self-help, how-to
and "wisdom of menopause" books return to dominate, as they once did,
the now separated-from-birth (and diet and crosswords) New York
Times nonfiction bestseller list. In the
April 21 issue of the Sunday New York Times Book Review, nearly
half the top ten nonfiction bestsellers belong to a genre that
middle-of-the-road innocents might label "one-sided," "unbalanced,"
"exclusionary" or worse, though the Times's blurbs artfully avoid
the issue.

Michael Moore's Stupid White Men, which manages the non-Euclidean
trick of being centrifugally one-sided, denounces us as a racist, sexist
"nation of idiots" even though we're plainly not a nation of idiots.
Whether you love Moore for blasting the "Thief-in-Chief" or adore him
for bashing Clinton and paying dues to the NRA, he's still guilty, as
Ben Fritz's stiletto review in Salon demonstrated, of being "One
Moore Stupid White Man," because "Moore gets his facts wrong again and
again, and a simple check of the sources he cites shows that lazy
research is often to blame."

David Brock's Blinded by the Right castigates the conservative
movement, which Brock recently fled, as "a radical cult" bored by ideas
and committed to a "Big Lie machine that flourished in book publishing,
on talk radio and on the Internet through the '90s." Brock insists on
this even though many conservatives believe in right-wing principles as
honestly as leftists and liberals believe in theirs. While it was lauded
by Frank Rich as "a key document," by Todd Gitlin as a book that "rings
with plausibility" and in these pages by Michael Tomasky as essential to
understanding this "fevered era," its credibility on the left seems
largely based on Brock's hawking a story the left wants to hear, just as
the right thrilled to The Real Anita Hill: that a "convulsed
emotional state," as Tomasky construes it, rather than an ideology, "is
the real binding glue among the right." Despite Brock's repeated
acknowledgments that he's been an unscrupulous, self-serving liar
throughout his life, flatterers of his book give little credit to the
possibility voiced by Slate's Timothy Noah that lying may be "a
lifelong habit" for the author. Bernard Goldberg's Bias, in turn,
offers mirror-image goods to true believers on the right: chapter and
verse on how his old employer, CBS News, and the media in general,
"distort the news" in a liberal direction, even though the media, by and
large, do not distort the news--they report it. On the strength of one
purported conversation with CBS News president Andrew Heyward, however,
and his own epiphanic experience after writing an anti-CBS Op-Ed for the
Wall Street Journal, Goldberg sounds certain that he's packing
smoking guns. No matter that he fails to clarify, in case after case,
how "bias" differs from a presumptive judgment held on the basis of
revisable evidence, or why conservative bias poses no problem within
eclectic media.

Finally, Kenneth Timmerman's Shakedown, another targeted killing
by the only national publishing house with the reflexes of a helicopter
gunship, leaves Jesse Jackson barely breathing as a political player.
But if fairness ruled the world of book manuscripts, this one would have
swelled to far more than 512 pages. Because while Rod Dreher of The
National Review complimented the author for "collecting the dossier
on Jackson between two covers," a dossier in court or an academic
department typically contains both good and bad. The Washington
Post's Keith Richburg, crediting Timmerman's "meticulous research,"
rightly noted that the author also wholly ignores "Jackson's
accomplishments," like his registration of millions of new voters.

So is Moore a direct literary descendant of Adolf Hitler, that
over-the-top idea man whose snarly diatribes grabbed Publishers
Weekly's number-seven bestseller slot for 1939? Will self-confessed
"right-wing hit man" Brock--political sex-change operation or not--be
remembered as an heir to the legacy of Barry (Conscience of a
Conservative) Goldwater? Should Timmerman, whose Shakedown
batters Jesse so badly his reproductive equipment may never recover, be
considered just another scion of Victor Lasky, whose ferociously
critical attack on John F. Kennedy awkwardly arrived in 1963? And what
of Goldberg, our redemption-minded spy who came in from the ill-told?
Will his Bias someday be taught in the Columbia publishing course
alongside that 1923 bestseller, Emile Coué's Self-Mastery
Through Conscious Auto-Suggestion, whose system apparently involved
repeating to oneself, "Every day, in every way, I am getting better and
better"?

Yes, Flannery O'Connor was right: "There's many a best-seller that could
have been prevented by a good teacher." Each of these polemics keeps
rolling as a big commercial success for its publisher, even though, by
any standard of evenhandedness, each practices the big lie by what it
omits. Are they skyrocketing hits because they're tantamount to "big
lies," texts unwilling to address contrary views?

Maybe we've entered an era in which publishers and readers no longer
care about two hands working at complementary tasks--about evidence and
counterevidence, arguments and counterarguments, decency toward subject
matter. One way to interpret the ascent of the Feckless Four is to
accept that literary producers and consumers think we should leave all
that to college debating societies, scholarly journals and books,
newspapers of record and the courts. That's truth territory--this is
entertainment. And could that actually be the crux of the putative
trend? The recognition, by publishers, buyers and canny trade authors
alike, that well-balanced, evenhanded, scrupulously fair nonfiction
books bore the hell out of readers, however many prizes they may win?

Perhaps, in other words, the rise of the polemic is not simply a passing
curiosity, a reaction to political correctness cutting both ways in 2002
America, but a stage of evolutionary development in a
post-eternal-verities culture. Educated readers--whether right or
left--hunger for books that simply smash the opposition and make one
feel the only sensation sweeter than orgasm: the sense of being utterly,
unimpeachably right. To update an old saw by publisher William Targ, too
many people who have half a mind to write a nonfiction bestseller do so,
and that's roughly the amount of brainpower the reader desires.

It certainly feels as if we're facing an epiphenomenon of the moment, an
upshot of the electorate we saw polarized on that red and blue 2000
electoral map. And yet, over the decades one spots many precursors of
Moore, Brock, Goldberg and Timmerman (a crackerjack adversarial firm
that might cost hundreds per hour if journalists billed like lawyers).
Michael Korda's recent Making the List: A Cultural History of the
American Bestseller, 1900-99 (Barnes & Noble) suggests
that curators of American bestseller lists could have put up the neon
Onesided Books 'R' Us sign long ago. Diet books, medical guides, how-tos
and self-improvement schemes, after all, ritually command readers to do
it this way, not that way. Dale Carnegie made it to the list with How
to Win Friends and Influence People, not How to Win Friends,
Influence People and Also Estrange a Ton of Other Folks. Books by
political candidates advancing their platforms may not sizzle with
Moore's streety phrases or Brock's inside snitching, but they slant the
truth just the same. Similarly, the titles of leading bestsellers of the
1930s--Ernest Dimnet's What We Live By, Walter Pitkin's Life Begins
at Forty and Walter Duranty's I Write as I Please--suggest unshakable
points of view promised and delivered.
Even in that war-dominated decade, one sees the forerunners of today's
divided left/right list, with Mission to Moscow, which offered,
Korda writes, a "benevolent view of Joseph Stalin," coming in second on
the 1942 bestseller list, while John Roy Carlson's Under Cover,
"an expose of subversive activity in the United States," rose to number
one in 1943. Yet, Korda observes, while Americans favor books that
"explain to them what is happening," they "still want to be amused,
entertained, and improved." So when authors like Moore, Brock, Goldberg
and Timmerman bring added assets to their unbalanced texts--Moore's
over-the-line wit, Brock's salacious gossip, Goldberg's hate-the-media
vibes and Timmerman's avalanche of dirt--it's like attaching an extra
rocket to the binding.

The presence of one-sided books on bestseller lists, in short, is no
fleeting phenomenon. It's a tradition. But might their increase threaten
the culture? Not likely. Here an insight from Korda fuses with a larger
appreciation of how philosophy in the broadest sense--the way we
organize what we know into views that hang together--operates in
American culture.

Korda extrapolates from bestseller history that "American readers have
been, since the 1940s, increasingly willing to be challenged and even
attacked. They might not have been eager to accept these challenges in
person...but they were willing to buy and read books that criticized the
status quo." He cites fiction as well Laura Hobson's novel
Gentleman's Agreement (1947), with its critique of anti-Semitism,
and Sloan Wilson's The Man in the Gray Flannel Suit (1955), which
eviscerated the "white upper-middle-class lifestyle." It's equally true
that American bestsellers from the beginning sometimes set themselves
against a prevailing yet vulnerableview. Tom Paine's Common Sense
took off and became common sense after he insulted George III and monarchy
the way Moore zaps George the Second, and, well, monarchy.

Korda's insight jibes with a larger truth. Our growing readiness not
only to tolerate but to prefer lopsided views of things arises from our
gut-level understanding that America, at the dawn of the twenty-first
century--and contrary to its clichéd cultural image--stands as
the most vibrant philosophical culture in the history of the world, an
unprecedented marketplace of truth, argument, evidence and individuated
positions on sale to any browser with a browser. Anyone with a pulse and
a laptop can access material supporting the right, the left, the up, the
down, the Israeli view, the Arab view, the Zoroastrian, the pagan, the
poly, the foundationalist, the nonfoundationalist, the libertine, the
puritanical, the environmental, the deconstructionist, the Lacanian, ad
infinitum. That reservoir of opinions, attitudes and slants lifts our
tolerance for one-sidedness into an appetite for edifying entertainment.
Because we can order or click our way to the other side of almost any
viewpoint, and can get it wholesale or retail, we forgive omissions. In
our cornucopia culture, only diners have to offer everything.

TV executives, of course, knew from early on that brash, partisan
talk-show hosts would outrate scholarly balancers every time. (The talk
show, from Alan Burke and Joe Pyne to Bill O'Reilly, has mainly been an
exercise in getting someone to scream uncle.) So, in turn, canny
commercial publishers know that supplying "the other hand" can safely be
left to the equally one-sided polemicist around the corner, or to the
culture at large (particularly if the status quo is the "position"
omitted). The nonfiction polemic, like provocative theater, demands an
interactive audience member who'll supply or obtain elsewhere whatever's
missing, up to the level of individual need. The upshot of rampant
American pluralism, if not neatly packaged truth or beauty in marketable
texts, is an unburdening of public intellectuals and trade authors from
the academic obligation to be fair, judicious and open-minded. Like
artists, they're simply expected to arouse.

It's an unholy system, all right. A typically American market solution
to our supposedly innate demand for equity in the pursuit of knowledge.
But it's ours. And the big bucks it produces for paperback and foreign
rights? Don't even ask.

Is this it? The end of the Oprah Book Club as we know it?

It's Thursday, April 4, at approximately 3:45 pm. In less than
twenty-four hours, virtually everyone in America will have received word
of Oprah Winfrey's abrupt decision to cancel her televised book club,
but now, as member number 251 in a select studio audience of about
300, I find myself privy
to this news before it has broken over the general populace. It is with
no small sense of irony that I find myself here at this unforeseeably
historic taping. For one thing, I don't even own a TV and have had
little direct exposure to The Oprah Winfrey Show up until this
moment. For another, I'm here not because I'm a fan but because I'm
hurrying to finish my lengthy English thesis on the impact of the Oprah
Book Club on American literary culture. In fact, my very arrival here at
Harpo Studios played out something like a game of six degrees of
separation, starting during a thesis-writing seminar last fall when a
friend and fellow student mentioned that her mother's cousin's friend
knew Oprah's makeup artist, and would I like help getting tickets.

Now--countless e-mails, multiple phone calls and several months later--I
have come to Chicago's West Loop from Washington this very morning
expecting to receive a typical and formulaic book-club-segment
experience. I plan to take a few notes, write a nice, anecdotal
first-person account of the whole thing upon my return home and be done
with it. Still, along with every other polite, neatly dressed guest
present, I gasp with pure, unstaged shock when, immediately after
returning from a commercial break, Winfrey stands up and declares, "I
just want to say that this is the end of the book club as we know it."

I sit stunned in my seat listening to the rest of her official statement
that will air during her regularly scheduled program on Friday, the
statement in which she explains before the cameras that "the truth is,
it has just become harder and harder for me to find books on a monthly
basis that I am really passionate about." I hear from Winfrey--as will
anyone else who watches the show, listens to the soundbites or reads the
papers--that "I have to read a lot of books to get to something that I
really passionately love, so I don't know when the next book will be. It
might be next fall or it could be next year. But I have saved one of the
best for last. It's one of my all-time favorites, and we'll be
discussing this selection as usual in about a month. So my final
selection is Sula. Sula, by my favorite author, Toni
Morrison." Unlike most other people who will hear this quote bandied
about the press for weeks to come, from my position, dead-center in the
third row, I have the advantage of hearing those parts of Winfrey's
explanation that will not make the TV edit.

I hear her say during one of the final commercial breaks that six years'
worth of book club has been long enough for her, that having to read so
many contemporary novels with an eye toward picking one for the show is
just too much pressure in conjunction with everything else she has to
do, and that she wants to take time now to return to the classics. I
hear her say that she spent the previous weekend rereading The Great
Gatsby, a title to which the audience responds appreciatively with
knowing oohs, ahhs and nods.

Back on the air again at a few minutes before 4 o'clock, an assortment
of staffers pass out copies, both hardcover and paperback, of the final
selection. Winfrey reminds all of us in the audience and, of course,
everyone watching at home, "After you read it, write me a nice letter. A
great Toni Morrison-worthy letter, OK, because in the end she's
going to see your letters too," before laughing, thanking us and
plunging into the well-mannered crowd herself to help with the
distribution of books. The cameras are rolling as I receive my copy of
Sula straight from Winfrey's hand; I could reach up and touch the
sleeve of her fuzzy, pale blue sweater or the crease of her tailored
gray trousers were I so inclined. By slightly after 4, the show is
over. The books have all been handed out, but Winfrey sticks around, as
is her habit, to chat with the audience after hours. It is during this
unaired window of time that Winfrey's fans have the opportunity to tell
their heroine what's on their minds. It is during this time, too, that I
witness the saddest part of my in-studio experience, sadder even than
Winfrey's initial announcement, sadder because it is heartfelt and
wholly unorchestrated.

Rising before posing her question, as we were instructed to do at the
beginning of the taping, a well-spoken middle-aged woman in a periwinkle
blue shirt addresses Winfrey. I do not catch her name because she is
speaking quickly and earnestly, and I couldn't record it anyway because
writing materials are not allowed. I do catch that she is a former
English teacher, a current mother and homemaker, and a longtime fan of
the Oprah Book Club. As such, she thanks Winfrey for having done so much
for reading and literature. Then, standing unselfconsciously in front of
us all, she pleads with Winfrey not to stop now. Recalling Winfrey's
rereading of The Great Gatsby and desire to return to the works
of dead authors, she wonders if it might be possible to continue to
include literature in the show's format by, say, hosting a themed
dinner, throwing a Roaring Twenties party or inviting a Fitzgerald
professor to say a few words about the works of F. Scott. There's
something strange and desperate and true in her plea, and I want so
badly for Winfrey to assent. Instead, Winfrey explains that she just
wants to be a "normal reader" for a while, and that although she and her
staff certainly considered such alternatives, the likelihood that any of
them could ever take place is slim. She does not want, she says
laughing, to have to read and select classic novels on the basis of
their potential for an accompanying dinner. By a quarter after 4, the
discussion turns from the announcement entirely. At approximately 4:30,
Winfrey announces that she must take her leave. Without another word
about the cancellation of the club, she's gone.

Filing from my section to the studio exit, I can't help considering that
this unexpected last chapter in the story of the Oprah Book Club is not
dissimilar to the kind of secret or surprise divulged in a number of the
novels that were her book club picks. Unlike the best of the Oprah
selections, though, this story seems to have a highly unsatisfying
conclusion. Nonetheless, it is done, and it seems a shame that the club
was never discussed as the rich cultural phenomenon that it really was,
but rather, as is typical of so much contemporary cultural commentary,
almost exclusively in terms of commerce. In fairness, each and every one
of Winfrey's forty-eight selections over the past six years became a
bestseller, and in an industry in which only a few novels sell more than
30,000 copies, the fact that those recommended by Winfrey routinely sold
a million or more secures the club's status as an undeniable economic
marvel.

Still, even when the opportunity for broad-based exploration of the club
arose, as in the case of last fall's dust-up with Jonathan Franzen,
reductive high-versus-low cultural bickering seemed the only result. Now
that the club is over, perhaps we can examine the story of the Oprah
Book Club with the care we would devote to the analysis of any complete
story.

More than anything else, we'll find that the club was not just extremely
significant, hopeful and positive as a development but was actually a
revolutionary cultural event. The use of such a far-reaching television
program--The Oprah Winfrey Show charts a domestic audience of an
estimated 26 million viewers per week, plus a foreign distribution in
106 countries ranging from Afghanistan to Zimbabwe--as a deliberate
means to such a flourishing literary end was unheard-of before Winfrey.
More than any other cultural authority, Winfrey made an almost
subversive use of television, a categorically "low" medium, to bridge
the high-low cultural chasm that cleaves the American literary
landscape. Thus, Winfrey fought the good fight for literature in America
by promoting an enormous and active readership, racking up her
victories--succeeding with grace and ease in the creation of new readers
where the book industry itself had failed. Indeed, her widely inclusive
televised discussion of books had millions of people reading within the
club and outside it. Typically, by the time a book club segment
appeared, more than 500,000 people had read at least part of the novel
and nearly as many would buy the book in ensuing weeks. Moreover, the
club resulted in people reading titles other than those featured on the
show. According to Bob Weitrach, director of merchandise at Barnes &
Noble, 75 percent of the people who bought a book club title bought
something else too. And even though there are some who would say--and
who did say--that the revolution should not have been televised, they
were, quite simply and sadly, wrong, and now we're seeing the cost of
their snide, misguided complaints.

Whatever else can be said about the Oprah Book Club--that it
superficially treated fictional works as Things That Really Happen; that
the narratives of the books themselves were flattened by the pandering,
shallow narrative of the television program; that it drew an inordinate
amount of attention to the personalities of the authors--the reality
that it offered or came close to offering a third way of sorts between
America's high and low cultural literary camps cannot be denied. By
providing substantial evidence that such arbitrary and binaristic
classifications as high and low may actually have the same limits,
boundaries and scope, the Oprah Book Club presented a way to begin
healing the senseless rift in American literary culture.

Paradoxically, within Oprah's success rested the very problem so many
people had with the book club, the problem that led to its untimely
demise. For
as Richard Lacayo noted in Time, "Culture snobs who thought of
her as that mawkish woman who was always on a diet now think of her as
that mawkish woman on a diet who has got millions of people to read Toni
Morrison." In short, even though Winfrey's position as a major arbiter
of literary taste was undoubtedly established, her right to hold that
position in the first place was subject to a great deal of unabashed
public doubt. As C. Wright Mills observed, virtually all taste is
dictated, if not by recognized cultural authorities at the so-called
top, then from somewhere. All reviewing of or advocacy for a particular
book--whether it appears on the book's jacket, in The New York Times
Book Review or wherever else--may be construed as suggestion or even
a subtle form of coercion from those in positions of cultural
superiority to those at lower levels. Worthy of note, too, is the fact
that most people seem fairly comfortable with this long-established
tradition of how we, the public, are told how and what to read by
various powers that be, many of whom are perceived as members of some
kind of specialized literary class.

A reasonable question, then, becomes why widespread signs of discomfort
surfaced only when said power manifested itself in the form of a
middle-aged black woman and, more precisely, a middle-aged black woman
with lots and lots of money (her net worth is estimated at $425
million). For even though Winfrey picked a multitude of critically
acclaimed books (including Nobel laureate Toni Morrison's Song of
Solomon and Jane Hamilton's PEN/Hemingway-winning
The Book of Ruth), her picks still managed to be subject to
critical scorn once they had received her approbation. In short, Winfrey
books exhibited an inversely proportional relationship between their
cultural capital--low--and their economic capital--high. The critical
backlash against the selections of the club presented unfortunate proof
of how caught up in a kind of textbook hierarchy of legitimacy American
literary culture really is.

Indeed, in large part because Winfrey selected titles with an eye toward
both their literary merits and their ability to go over well with an
audience consisting chiefly of women between the ages of 18 and
54--women who, incidentally, purchase and read more than 70 percent of
the fiction sold in this country--the club was perceived as an easy
target, open to countless cheap shots. I'm not suggesting here that all
the Winfrey-selected books of the past six years--thirty-five of them by
women and thirteen of them by men--were brilliant, nor that there should
be no distinction drawn between top- and poor-quality literature. What I
am suggesting, having read the majority of the novels myself, is that
Winfrey's picks proved that readable literature is not by definition
unchallenging or unworthy of both popular acclaim and critical respect.
Put another way, for every stray inferior club pick, like The Pilot's
Wife, there were multiple superior club picks, like The Poisonwood
Bible. Moreover, Winfrey continued to move the club in
increasingly challenging directions right up to the bitter end, picking
such serious and demanding works as Rohinton Mistry's A Fine Balance
and Franzen's The Corrections. The disinvitation
fiasco--wherein Franzen insulted Winfrey and she, in turn, canceled his
appearance on the show--could have served as a tremendous asset to the
club, the literary community and the country. Instead, it became a
liability, a disheartening battle of egos between its figureheads and
led to attendant galvanization along the lines of high culture versus
low among the population at large. Owing in no small part to this highly
publicized challenge to her cultural authority, Winfrey seems to have
come now to the conclusion that the club is just no longer worth it if
it means being exposed to such derision.

None of this alters the fact that while it lasted, the club was an
unquestionably encouraging phenomenon, indicative of an American impulse
toward intellectual self-improvement and a hunger for the kind of
seriousness and stimulation that good literary fiction can offer. Such a
story as that of the Oprah Book Club should not suffer from so weak an
ending. The closing of the book before a satisfactory denouement
represents a tremendous loss to the promotion of active
readership.
