For readers of this magazine and millions of other Americans, the
initial horror of September 11 was compounded by the sobering
realization that George W. Bush would be at the helm for the aftermath.
With a cabal of fundamentalists, crackpots and fascists whispering in
his ear, Dubya became the world's most dangerous weapon. Perhaps, we hoped, the rather low esteem in which he was held by the American people, the news media and much of Congress might save us.

No such luck. Congress and the mainstream media lined up behind him in
lockstep. Instances of his much-vaunted ignorance wound up on the
cutting-room floor. One cable network ran daily promos of Bush spurring
on World Trade Center rescue workers, declaring that he had "found his
voice" amid the rubble. Pundit Peggy Noonan declared Bush's post-9/11
speech to Congress no less than "God-touched"; he had "metamorphosed
into a gentleman of cool command...[with] a new weight, a new gravity."
Yet, despite the rise in his approval ratings, many harbored lingering
doubts about the extent to which a "new" Bush existed.

Among the many critical viewpoints drowned out in the wake of the
attacks was Mark Crispin Miller's The Bush Dyslexicon, the first
systematic critical examination of the President's mistakes,
misstatements and malapropisms. Fortunately, this clever volume has been
reissued with updated material on Bush's sayings and doings since that
time.

Bush's propensity for mangling the English language is no secret to
anyone. No doubt we all have our favorites, which we've gleefully shared
with friends, family, co-workers and comrades. Miller, a professor of
media ecology at New York University, has compiled what is clearly the
largest collection of Dubya-isms to date, among them these treats:

§ On his qualifications to be President: "I don't feel I've got all
that much too important to say on the kind of big national issues"
(September 2000); and "Nobody needs to tell me what I believe. But I do
need somebody to tell me where Kosovo is" (September 1999).

§ On coping with terrorism and other threats: "[We'll] use our
technology to enhance uncertainties abroad" (March 2000); and "We'll let
our friends be the peacekeepers and the great country called America
will be the pacemakers" (September 2000).

§ On Russia: "And so one of the areas where I think the average
Russian will realize that the stereotypes of America have changed is
that it's a spirit of cooperation, not one-upmanship; that we now
understand one plus one can equal three, as opposed to us, and Russia we
hope to be zero" (November 2001).

Miller vividly illustrates the depth of ignorance--as opposed to
stupidity--that leads this President away from direct contact with
journalists whenever possible. Miller also demonstrates that Bush's
"problem" with language is not easily separated from his "problem" with
policy and politics. If we focus exclusively on his stormy relationship
with proper grammar and logical sentence structure, Miller argues, we
risk underestimating what his presidency means for the United States and
the world. "Our president is not an imbecile but an operator just as
canny as he is hard-hearted.... To smirk at his alleged stupidity is,
therefore, not just to miss the point, but to do this unelected
president a giant favor."

Loosely organized by subject matter--"That Old Time Religion," "It's
the Economy, Your Excellency"--the book's chapters chronicle several
intertwined aspects of the chief executive: the politics of style that
characterize his behavior and demeanor; the media's role in crafting him
as a valid presidential candidate and, post-9/11, a changed man; the
Bush family's political legacy and troubled public image; and, finally,
the real meaning behind Dubya's flubs and gaffes.

Miller documents in detail how major news outlets have from the
beginning provided a heavily edited public transcript of Bush's
statements and have helped steer viewers away from his lack of policy
knowledge. Even more disturbing are the ways the media have simply
reported Bush's "ideas" without comment. Commenting on a Kansas
school-board vote to end evolution's exclusivity in the state science
curriculum (later overturned), for example, Bush declared, "I personally
believe God created the earth" (September 1999); later, he opined,
"After all, religion has been around a lot longer than Darwinism"
(September 2000).

The abundant evidence Miller provides of Dubya getting pass after pass
in the media seems particularly alarming. In addition to general
"cover," Cokie Roberts, Sam Donaldson and other famed "journalists" and
newspeople consistently let Bushisms fly with little or no comment. Note
this flub on the fate of Elián González's potential
citizenship during an airing of ABC's This Week:

Well, I think--I--It--listen, I don't understand the full ramifications
of what they're going to do. But I--I--I--think it'd be a--a--a
wonderful gesture. I guess the man c--the boy could still go back to
Cuba as a citizen of the United States.... I hadn't really thought about
the citizenship issue. It's an interesting idea, but if I were in the
Senate, I'd vote aye.

Roberts gave no response to the nonsensical Bush, nor did Chris Matthews
in this bizarre MSNBC Hardball episode in May 2000:

Matthews: When you hear Al Gore say "reckless, irresponsible," what do
you hear from him, really?...

Bush: I hear a guy who's not confident in his own vision, and,
therefore, wants to take time tearing me down. Actually, I--I--this may
sound a little West Texan to you, but I like it when I'm talking about
what I'm--what I--

Matthews: Right.

Bush:--when I'm talking about myself, and when he's talking about
myself, all of us are talking about me.

Matthews: Right.

Of course, these snippets pale in comparison to the alacrity with which
the media papered over the fact that our current President was not
elected by a majority of the populace.

This is quite a contrast to the dis-ease with which the fourth estate
treated Bush's predecessors. Miller traces the phenomenon back to
Richard Nixon, whom he calls the "godfather" of Bush-era politics. Like
Bush, Nixon was not a man well liked by the television cameras; nor, as
the White House tapes reveal, was he an especially enlightened man, with
his pedestrian literary interpretations, paranoid hatred of Jews,
virulent racism, sexism and homophobia. "You know what happened to the
Greeks!?" Nixon bellowed to Haldeman and Ehrlichman: "Homosexuality
destroyed them. Sure, Aristotle was a homo." Nixon's angry and, as
Miller describes it, "low-born" personality manifested itself throughout
his televisual life, particularly during the scandal that brought down
his presidency.

Inheriting this image problem was Dubya's patriarch, George Bush senior,
who not only worked for Nixon politically but also shared in his
televisually and verbally handicapped style. Whereas Nixon came off as a
classless bully, Bush suffered from sissiness, the infamous Wimp Factor:
"Bush's posh class background was his major TV problem, the cameras
mercilessly outing the big pantywaist within.... In fact, the Bush clan,
although fabulously wealthy, is not aristocratic enough to do well on
TV, if by that modifier we mean elegant and polished. First of all, the
Bushes often have let fly in the most boorish way--as when Barbara Bush
hinted coyly that Geraldine Ferraro was a 'bitch.'"

In an effort to analyze Bush Sr.'s wanna-be aristocratic demeanor,
Miller proceeds to call him a "Yalie faggot" and argues that the Bush
family's privilege put the elder Bush in the toughest of spots relative
to his macho Republican predecessors. On losing a straw poll in Ames,
Iowa, for example, Bush noted, "A lot of people who support me were at
an air show, they were off at their daughter's coming-out party, they
were teeing up at the golf course." Miller makes it abundantly clear how
frequently Bush Sr. not only missed, but miscalculated, the mark.

The point is that on television, class is not an economic issue but a
style issue. Given what Miller terms the Kennedy "savoir-faire," the
Bush family is at a distinct image disadvantage. Unfortunately, Miller
frequently analogizes Bush's moneyed privilege with a certain kind of
homosexuality--offensive behavior in a critic himself trying to "out"
Nixon's ignorance and homophobia. And he contrives that Barbara's
complaining of another woman's bitchiness is somehow anathema to
aristocratic behavior.

At root, these strangely aristocratic cheap shots smack of a kind of
backhanded liberal Kennedy worship. It is impossible to miss the
implication that America's royal family is the standard-bearer of
sufficiently presidential (read: aristocratic and classy) demeanor.
Given that JFK was an ethically challenged, commie-hunting political
lightweight, Miller's willingness to engage in macho class snobbery
points to the disturbing presence in the book of a crass partisanship
better suited to a Democratic media flack than a scholar of the left.

Symptomatic of this is the fact that for much of the book Miller seems
to forget the high degree of political convergence between Bush and
neoliberal New Democrats like Al Gore. One cannot help wondering if
Miller thinks a Gore Administration would not have responded to
September 11 with military action, and with legislation that expanded
the already egregious powers given the government in the
Clinton-sponsored Counter Terrorism Initiative of 1995. This see-no-evil
quality of the book is all the more telling because it represents the
very type of amnesia that Miller says afflicts us all after years of
corporate-led media idiocy. When he harps on Clinton's downfall at the
hands of the right without sufficiently stressing Bill's own
never-ending rightward shift throughout his eight years in office, one
wonders if Miller's own political memory lapsed from 1992 to 2000. It is
not until near the end of the book that he turns tail and concedes Al
Gore's rather striking resemblances to a war-happy Republican candidate,
as Gore "spoke more expertly, but just as deferentially, straining to
out-hawk the jut-jawed W, arguing that he would raise the military
budget even higher and retrospectively saluting the preposterous
invasions of Grenada and Panama."

Finally, Miller's critique of the "politics of style" turns in upon
itself. Miller obtains the lion's share of Bushisms from precisely those
style-obsessed media outlets he accuses of bringing down Clinton and
building up Bush: the New York Times, Talk,
Glamour, 20/20 and Larry King Live appear all over
Miller's source citations, and he is just as dependent on, and dedicated
to, the politics of style as they are. At the end of the book, one
cannot help suspecting that Miller's beef with the politics of style is
that it took down his guy while it has yet to take down the other guy.

This hedging makes crucial parts of the book read like sour grapes and
detracts from the moments of sharp observation that Miller offers
elsewhere. He clearly grasps the very real danger of the Bush
Administration--his most intriguing observation is that Bush is not
always a rhetorical bumbler. As Miller conducts his repeated dissections
of various Bushisms, it becomes clear that this man is in fact possessed
of considerable guile. In an interview with Charlie Rose, in August
2000, Bush speaks about Saddam Hussein:

Rose: OK. What if you thought Saddam Hussein, using the absence of
inspectors, was close to acquiring a nuclear weapon?
Bush: He'd pay a price.
Rose: What's the price?
Bush: The price is force, the full force and fury of a reaction.
Rose: Bombs away?
Bush: You can just figure that out after it happens.

Here we see Dubya apparently willing and even eager to bomb a country
with which we are not at war--yet. Two years before the recent
enunciation of a "Hitting First" policy of pre-emption and even more
recent revelations of an existing attack plan from land, sea and air,
Bush's warring language was unambiguous. Likewise, when speaking of
anger and vengeance post-9/11, he is nothing if not clear, and his
dyslexic tendencies are nowhere in evidence. Down-homish and
cringe-inducing though it may be, "evildoers" is a word whose meaning
is singular, and Bush's repeated use of it has not been subject to the
usual emendations or "clarifications" of his handlers. Similarly, Bush
famously threatened to "smoke 'em out" of their holes, another
inappropriate, unpresidential phrase; yet no one was confused about
what it meant for Al Qaeda.

The Bush Dyslexicon makes it clear that even after the 11th of
September, Bush's personality was far from "God-touched" or even
transformed; in fact, provided with the opportunity to inflate his
defense budget, savage Social Security and go after the Taliban as if in
a 'coon hunt, Bush was just this side of gleeful at the prospect of
revenge. Hardly had the mourning American public time to collect itself
before Dubya encouraged the military to "smoke 'em out of their caves,
to get 'em runnin' so we can get 'em" in order, as Bush himself put it,
to "save the world from freedom."

Given the potentially dire consequences of Bush's post-9/11 policy
agenda, though, it seems strangely incongruous that Miller so often goes
for the breezy, snappy rhetoric and eschews a more forthrightly
analytical tone. It may be therapeutic to laugh in the face of danger,
but somehow these do not seem to be particularly funny times.

A half-century ago T.H. Marshall, British Labour Party social theorist,
offered a progressive, developmental theory for understanding the
history of what we have come to call citizenship. Taking the experience
of Englishmen to define the superior path, he postulated a hierarchy of
citizenships: civil rights, political rights and social rights. The last of these became the
category in which twentieth-century Europeans have understood claims on
the state to health, welfare, education and protection from avoidable
risk. They conceived of these citizenships as stages in an upward climb
toward an ever better democracy.

Marshall's schema looked only at European men. Feminists have pointed
out that women did not achieve citizenship in this order. In fact, women
often won some social rights--for example, protective legislation and
"welfare"--before achieving political ones such as the right to vote.
And women's individual civil rights were often overwhelmed and even
suppressed by legally imposed family obligations and moral sanctions.
(For example, a century ago courts generally interpreted the law of
marriage to mean that women were legally obligated to provide housework,
childcare and sexual services to husbands.) Equally problematic were
Marshall's obliviousness to British imperialism and what it meant for
Third World populations, including the fact that he conceived of the
British as civilizers rather than exploiters, and his apparent ignorance
of the conditions of second-class citizenship for racial/ethnic
subordinates within nation-states. In short, his historical hierarchy
was highly ideological.

But no one has yet done what Alice Kessler-Harris has in her newest
book, In Pursuit of Equity, reaching beyond Marshall and his
critics to suggest a new concept, economic citizenship. In this history
of how women have been treated in employment, tax and welfare policy,
Kessler-Harris--arguably the leading historian of women's labor in the
United States--synthesizes several decades of feminist analysis to
produce a holistic conception of what full citizenship for women might
entail. In lucid prose with vivid (and sometimes comic) illustrations of
the snarled thinking that results from conceiving of women as
dependents--rather than equal in heading families--she offers a vision
of how we can move toward greater democracy. In the process, she also
shows us what we are up against. Her book illustrates brilliantly how
assumptions about appropriate gender roles are built into all aspects of
policy.

She aims to resolve what is perhaps the central contradiction for
policy-makers and policy scholars who care about sex equality: the
contradiction between, on the one hand, valuing the unpaid caring work
still overwhelmingly performed by women and, on the other hand, enabling
women to achieve equality in wage labor and political power. Today, for
example, although all feminists oppose the punitive new requirements of
the policy that replaced Aid to Families with Dependent Children,
repealed in 1996, they are divided about what would constitute the right
kind of welfare system. Some find it appropriate that all adults,
including parents of young children, should be employed, assuming they
can get a living wage and good childcare. Others, often called
maternalists, believe a parent should have the right to choose full-time
parenting for young or particularly needy children. Behind this difference lie two different visions of
sex equality--one that emphasizes equal treatment of the sexes and individual rights
and responsibilities, another that seeks to make unpaid caring labor,
notably for the very young, the old and the ill, as honorable and valued
as waged labor.

Kessler-Harris would resolve this contradiction through a labor-centered
view of citizenship, a notion of economic citizenship based on equity,
or fairness, in the valuation of socially worthy labor. Previously, the
policy proposal closest to this principle of equity was "comparable
worth." Second-wave feminists saw that the Equal Pay Act of 1963 and
Title VII of the Civil Rights Act of 1964 had failed to equalize male
and female wages. Because the labor force is so segregated, and female
jobs are so consistently undervalued, equal pay alone cannot produce
justice for women (or men of color). The comparable-worth strategy called
for equal wages for work of comparable expertise and value, even when
the jobs differed. Consider, for example, the wage gap between truck
drivers and childcare workers, or the fact that truck drivers earned
much more than registered nurses, whose training and responsibility were
so much greater. The women's movement's challenge to inequality in jobs took off
in 1979, when Eleanor Holmes Norton, then head of the Equal Employment
Opportunity Commission, called for evaluations of job skills to remedy
women's low wages. But her successor, Clarence Thomas, refused to
consider comparable-worth claims. Although some substantial victories
were achieved in state and union battles--for example, the American
Federation of State, County and Municipal Employees (AFSCME) won wage
increases averaging 32 percent and back pay retroactive to 1979 for
Washington State employees, 35,000 of whom shared a $482 million
settlement--the comparable-worth campaigns faded in the 1980s.

But even had the comparable-worth strategy been adopted, it could not
have recognized the hours spent in caring for children, parents,
disabled relatives and friends, not to mention the work of volunteering
in underfunded schools, cooking for homeless shelters, running kids'
basketball teams. Kessler-Harris is arguing for a citizenship that
respects unpaid as well as paid labor.

She has worked out the arguments in this book systematically over many
years. Several years ago, an article of hers with the deceptively simple
title "Where Are All the Organized Women Workers?" enlarged the
understanding of gendered "interests" from an exclusive focus on women
to take in men as well. She demonstrated that so long as men dominate,
aspirations understood and characterized as class interests often
express gender interests equally strongly. She uncovered how unions
often operated as men's clubs, built around forms of male bonding that
excluded women, primarily unconsciously but often consciously, too. In
this new book she extends her analysis of men's gendered interests to
reveal how labor unionists' inability to stop defending the privileges
of masculinity has held back labor's achievements. One vivid example
was unions' opposition to state-funded welfare programs and
health-and-safety regulation, stemming from anxiety that they would
deprive workers of their manly independence. Of course, unionist
resistance to state control over workplace and work-centered programs
also derived from a defense of workers' control. But this vision of
workplace democracy was inextricably masculinist, and workingmen's
understanding of their dignity rested on distinguishing themselves from
women.

In A Woman's Wage, Kessler-Harris showed that both Marxist and
neoclassical economics were mistaken in their joint assumption that the
wage was somehow a consistent, transparent token of the capital/labor
relation. By contrast, wage rates, wage systems, indeed the whole labor
market were constructed by gender interests and ideology as well as by
supply and demand or surplus value or the actual cost of subsistence. A
wonderful example from her new book: The Hawthorne experiments of the
late 1920s have been interpreted to show that women workers were more
tractable than men. In one study, a group of women workers adapted more
cooperatively and quickly to a speedup than did a group of male workers.
In seeking to explain this behavior, investigators examined the women's
home lives and even their menstrual cycles, while paying no particular
attention to the fact that the collective rather than individual wage
structure imposed on them was such that higher productivity could only
increase their total wages, while the men's piece-rate wage structure
offered no such guarantee--in fact, the men had reason to expect that
the piece rate would be lowered if they speeded up. We see here not a
"natural" gendered difference arising informally from culture and
socialization, but female and male workers responding rationally to a
gendered system imposed by employers.

In Pursuit of Equity argues that no one can enjoy civil and
political rights without social and economic citizenship. Marshall's
alleged gradual expansion of civil and political rights not only
excluded many others but actually strengthened women's exclusion from
citizenship. One fundamental premise of democratic capitalism--free
labor--was never fully extended to all women, whose labor was often
coercively regulated, not only by husbands but by the state.
Kessler-Harris shows how free labor developed in tandem with the "family
wage" ideal, that is, that husbands/fathers should earn for the entire
family and that women's destiny was domestic unpaid labor. The correlate
was that men "naturally" sought economic and social independence while
women "naturally" sought dependence. Ironically, most feminists of the
nineteenth century went along with this dichotomy and tried to root
women's citizenship in their essential family services rather than in
the free-labor definition of independence. That is, they argued for
rights on the basis of women's spiritual and material work in unpaid
caretaking labor.

The book demonstrates particularly effectively how the dominant modern
gender system--the family-wage norm--made it difficult for women to
become full citizens. In one closely documented section, Kessler-Harris
exposes the condescending and defensive assumptions of those who drafted
the Old Age Insurance program (which later became Social Security). The
drafters agreed, for example, that the widow of a covered man with young
children should be able to receive three-quarters of his pension until
she remarried or the children reached 18. A widow without children
lacked any rights to her husband's pension. But if this pension was her
husband's by right, as the designers insisted, then why were his heirs
not entitled to all of it as with all other parts of his property? If
the widow remarried, she would not have to give up the bank account or
house or car he had left her--why should she give up a Social Security
pension? One Social Security drafter argued that retaining such an
annuity after remarriage would make widows "a prize for the fellow that
has looked for it," assuming that women are entirely passive in marriage
decisions! The drafters were all convinced that "once a woman was no
longer dependent on the earnings of a particular male (dead or
alive)...his support for her should cease." In other words, his status
as breadwinner should continue even after his death. The drafters
rejected the idea of granting all widows of covered men an equal stipend
or one based on the number of children. It was important for her
benefits to be calibrated to his earnings so as to feed "the illusion
that families deprived of a father or husband would nevertheless
conceive him...as a continuing provider." "Why should you pay the widow
less than the individual himself gets if unmarried?" Because "she can
look after herself better than he can." Imagining women as less capable
of handling money than men, the designers removed the option of a
lump-sum benefit to widows, requiring them, unlike men, to receive
monthly stipends. To avoid "deathbed marriages," they allowed a widow to
collect only if she had been married and living with her husband for at
least a year before he died.

The concern with male status was reflected particularly comically in
discussions about the age at which a wife could start to receive her
share of her husband's benefits. Some argued for an earlier "retirement"
age for women because if both men and women were eligible at 65, this
would mean that men with younger wives--a common phenomenon--might not
get their full pension for a number of years after they retired. But
others argued that since men who married much younger women were more
likely to be those who had married more than once, granting women an
earlier retirement date might reward these men over single-marriage
retirees.

Several decades ago economist Heidi Hartmann pointed out that patriarchy
was as much a system of power and hierarchy among men as a male-female
relation, and Kessler-Harris confirms that insight. For example, the
entire debate about whether married couples should be able to report
separate incomes for IRS purposes concerned the inequalities this would
create between men with employed wives and men with nonemployed wives.
Fairness to women was not a prominent concern. The fact that employed
women's old-age insurance benefits were restricted according to their
marital status while men's weren't "did not seem like sex discrimination
[to the Social Security designers] but rather like equity to men."

At the core of In Pursuit of Equity is the understanding that
what is "fair" is historically changing. The problem we face today is
not that men deliberately built policies to subordinate women but that
when our basic economic policies were established, men and women alike
tended to see male breadwinning and female domesticity as "fair." That
standard is far, far from reality today. One result is a double standard
in which supposedly ideal family life, requiring a full-time mother, is
a privilege of wives of high-earning husbands.

In the United States, the resultant damage is worse than in Europe,
because here many fundamental aspects of citizenship flow from the labor
market. "Independence" today is generally defined as earning one's
living through wages, despite the fact that the resulting dependence on
employers leaves workers as vulnerable as, if not more vulnerable than,
dependence on government stipends would. Social rights vital for survival,
such as medical insurance, retirement pensions and workers'
compensation, typically derive from employment in this country, in
contrast to most developed countries, which provide such help as a
matter of right to all citizens or residents. This is one way in which
American wage workers, as Kessler-Harris says, were "in a different
relationship to the constitution than those who did care-giving work."
As a result, the development of democratic capitalism, and even the growth of
working-class power, in some ways failed to strengthen women's economic
citizenship and at times weakened it. Indeed, she shows how victories against
sex discrimination in the labor force in the 1960s inadvertently
confirmed the assumption that all women could and should work for wages,
thereby contributing to the repeal of welfare without creating the
conditions that would make it possible for poor women to support
themselves through employment.

This gendered citizenship became more visible and more obnoxious to
women as wage-earning became the female norm and as "alternative
families" gained political clout. For example, if every individual was
entitled to an old-age pension and unemployment compensation, we
wouldn't have to struggle over the inheritance rights of gay partners
or stay-at-home parents' need for support. Even today, banning sex
discrimination is difficult because it is difficult to get agreement on
what constitutes discrimination. In a few cases division among feminists
has held back the struggle. Kessler-Harris ends the book with a brief
reprise of EEOC v. Sears, Roebuck & Co., a 1980s marker of
this division and a case in which she herself played a significant role.
Sears admitted that very few women held any of its well-paying
commission sales jobs but argued that women were not interested in these
jobs because the positions were competitive, pressured, demanding.
Another historian of women testified for Sears against the women
plaintiffs, using her expertise to argue that women's primary attachment
to unpaid domestic labor led them to want only jobs which did not
conflict with it. Her arguments illustrated vividly the continuing
influence of this emphasis on male/female difference, not necessarily as
"natural" or essential but nevertheless beyond the appropriate scope of
legal remedy. Sears won the case.

There is one pervasive absence in Kessler-Harris's book--race--and the
omission weakens the argument substantially. Her understanding of how
the family-wage ideal works would have to be substantially complicated
if she made African-American women more central, for they were rarely
able to adopt a male breadwinner/female housewife family model and often
rejected it, developing a culture that expects and honors women's
employment more than white culture. Mexican-American women's experience
did not fit the family-wage model either, despite their reputation as
traditional, because so many have participated in agricultural and
domestic wage labor throughout their lives in the United States. Equally
problematic for the argument, prosperous white women who accepted the
family-wage model often didn't do unpaid domestic labor because they
hired poor immigrants and women of color to do it for low wages. These
different histories must affect how we envisage a policy that recognizes
labor outside the wage system, and they need to be explored.

One aspect of Kessler-Harris's economic citizenship concept is being
expressed today by progressive feminists trying to influence the
reauthorization of Temporary Assistance for Needy Families (TANF), the
program for poor children and their parents that succeeded AFDC. We are
pushing a House bill that would recognize college education and
childcare as work under the new welfare work requirements. This book is
a sustained argument for that kind of approach and should help it become
part of the policy discussion. It probably won't win. Some will call it
unrealistic. But today's policies are already wildly unrealistic, if
realism has anything to do with actual life. If we don't begin now to
outline the programs that could actually create full citizenship for
women, we will never get there.

One of the most persistent myths in the culture wars today is that
social science has proven "media violence" to cause adverse effects. The
debate is over; the evidence is overwhelming, researchers, pundits and
politicians frequently proclaim. Anyone who denies it might as well be
arguing that the earth is flat.

Jonathan Freedman, professor of psychology at the University of Toronto,
has been saying for almost twenty years that it just isn't so. He is not
alone in his opinion, but as a psychologist trained in experimental
research, he is probably the most knowledgeable and qualified to express
it. His new book, Media Violence and Its Effect on Aggression,
surveys all of the empirical studies and experiments in this field, and
finds that the majority do not support the hypothesis that violent
content in TV and movies has a causal relationship to real violence in
society. The book is required reading for anyone who wishes to
understand this issue.

I should say at the outset that unlike Freedman, I doubt whether
quantitative sociological or psychological experiments--useful as they
are in many areas--can tell us much about the effects of something as
broad and vague in concept as "media violence." As a group of scholars
put it recently in a case involving censorship of violent video games:

In a field as inherently complex and multi-faceted as human aggression,
it is questionable whether quantitative studies of media effects can
really provide a holistic or adequately nuanced description of the
process by which some individuals become more aggressive than others.

Indeed, since "media violence" encompasses everything from cartoons,
sports and news to horror movies, westerns, war documentaries and some
of the greatest works of film art, it baffles me how researchers think
that generalizations about "effects" can be made based on experiments
using just one or a few examples of violent action.

Freedman, by contrast, believes that the experimental method is capable
of measuring media effects. This may explain why he is so indignant
about the widespread misrepresentations and distortions of the research
data.

He explains in his preface that he became interested in this area by
happenstance, and was surprised when he began reading the research to
find that its results were quite the opposite of what is usually
asserted. He began speaking and writing on the subject. In 1999 he was
approached by the Motion Picture Association of America (MPAA) and asked
to do a comprehensive review of all the research. He had not previously
received organizational support and, as he says, "was a little nervous
because I knew there was a danger that my work would be tainted by a
connection with the MPAA." He agreed only after making it clear that the
MPAA "would have no input into the review, would see it only after it
was complete, and except for editorial suggestions, would be forbidden
to alter what I wrote. Of course," he says,

they asked me to do the review, rather than someone else, because they
knew my position and assumed or at least hoped that I would come to the
same conclusion after a more comprehensive review. But there was no quid
pro quo. Although I was nervous about being tainted, I am confident that
I was not. In any case, the conclusions of this review are not different
from those of my earlier review or those I expressed in papers and talks
between 1984 and 1999.

The book proceeds meticulously to examine the approximately 200 studies
and experiments that Freedman was able to find after an exhaustive
search. (He suggests that the exaggerated numbers one often
hears--1,000, 3,500 or simply "thousands" of studies--probably derive
from a statement made by psychologist John Murray in the early 1980s
when the National Institute of Mental Health sponsored a review of the
media violence research. Murray said that there were about 2,500
publications of all kinds that were relevant to the review. This is far
different, of course, from the number of empirical experiments and
studies.)

Freedman begins with laboratory experiments, of which he found
eighty-seven. Many commentators have noted the artificiality of these
experiments, in which snippets of a violent film or TV show are shown to
one group of viewers (sometimes children, sometimes adolescents or
adults), while a control group is shown a nonviolent clip. Then their
level of "aggression" is observed--or rather, something that the
experimenters consider a proxy for aggression, such as children hitting
a Bobo doll (an inflatable plastic clown), delivering a "white noise"
blast or--amazingly--answering yes when asked whether they would pop a
balloon if given the opportunity.

As Freedman and others have pointed out, these laboratory proxies for
aggression are not the real thing, and aggressive play is very different
from real-world violent or destructive behavior. He comments:

Quite a few studies with children defined aggression as hitting or
kicking a Bobo doll or some other equivalent toy.... As anyone who has
owned one knows, Bobo dolls are designed to be hit. When you hit a Bobo
doll, it falls down and then bounces back up. You are supposed to hit it
and it is supposed to fall down and then bounce back up. There is little
reason to have a Bobo doll if you do not hit it. Calling punching a Bobo
doll aggressive is like calling kicking a football aggressive. Bobos are
meant to be punched; footballs are meant to be kicked. No harm is
intended and none is done.... It is difficult to understand why anyone
would think this is a measure of aggression.

Freedman notes other serious problems with the design of lab experiments
to test media effects. When positive results are found, they may be due
simply to the arousal effect of high-action entertainment, or to a
desire to do what the subjects think the experimenter wants. He points
out that experimenters generally haven't made efforts to assure that the
violent and nonviolent clips that they show are equivalent in other
respects. That is, if the nonviolent clip is less arousing, then any
difference in "aggression" afterward is probably due to arousal, not
imitation. Freedman's favorite example is an experiment in which one
group of subjects saw a bloody prizefight, while the control group was
shown a soporific film about canal boats.

But the most striking point is that even given the questionable validity
of lab experiments in measuring real-world media effects, the majority
of experiments have not had positive results. After detailed analysis of
the numbers that the researchers reported, Freedman summarizes:
Thirty-seven percent of the experiments supported the hypothesis that
media violence causes real-world violence or aggression, 22 percent had
mixed results and 41 percent did not support the hypothesis. After he
factored out experiments using "the most doubtful measures of
aggression" (popping balloons and so forth), only 28 percent of the
results were supportive, 16 percent were mixed and 55 percent were
nonsupportive of the "causal hypothesis."

For field experiments--designed to more closely approximate real-world
conditions--the percentage of negative results was higher: "Only three
of the ten studies obtained even slightly supportive results, and two of
those used inappropriate statistics while the third did not have a
measure of behavior." Freedman comments that even this weak showing
"gives a more favorable picture than is justified," for "several of the
studies that failed to find effects actually consisted of many separate
studies." Counting the results of these separate studies, "three field
experiments found some support, and twenty did not."

Now, the whole point of the scientific method is that experiments can be
replicated, and if the hypothesis is correct, they will produce the same
result. A minority of positive results are meaningless if they don't
show up consistently. As Freedman exhaustively shows, believers in the
causal hypothesis have badly misrepresented the overall results of both
lab and field experiments.

They have also ignored clearly nonsupportive results, or twisted them to
suit their purposes. Freedman describes one field experiment with
numerous measures of aggression, all of which failed to support the
causal hypothesis. Not satisfied with these results, the researchers
"conducted a complex internal analysis" by dividing the children into
"initially high in aggression" and "initially low in aggression"
categories. The initially low-aggression group became somewhat more
aggressive, no matter which programs they watched, while the initially
high-aggression group became somewhat less aggressive, no matter which
programs they watched. But the children who were categorized as
initially high in aggression and were shown violent programs "decreased
less in aggressiveness" than initially high-aggression children who
watched neutral programs. The researchers seized upon this one highly
massaged and obscure finding to claim that their results supported the
causal hypothesis.

Freedman examines other types of studies: surveys that compare cities or
countries before and after introduction of television; experiments
attempting to assess whether media violence causes "desensitization";
longitudinal studies that measure correlations between aggressiveness
and preference for violent television over time. No matter what the type
of study or experiment, the results overall are negative. Contrary to
popular belief, there is no scientific support for the notion that media
violence causes adverse effects.

Why, then, have not only researchers and politicians but major
professional associations like the American Academy of Pediatrics and
the American Medical Association repeatedly announced that thousands of
studies have established adverse effects of media violence? One reason
was suggested to me recently by a pediatrician active in the AAP. The
organization's guidelines call for scientific support for policy
statements. This puts the AAP in a serious bind when, as is the case
with media violence, its leaders have a strong opinion on the subject.
It's tempting then to accept and repeat assertions about the data from
leading researchers in the field--even when those assertions are distorted or
erroneous--and that's what the professional associations have done.

Another factor was candidly suggested by Dr. Edward Hill, chair of the
AMA board, at a panel discussion held by the Freedom Forum in New York
City last year. The AMA had "political reasons," Dr. Hill said, for
signing on to a recent statement by professional organizations asserting
that science shows media violence to be harmful. The AMA is "sometimes
used by the politicians," he explained. "We try to balance that because
we try to use them also."

Because Jonathan Freedman believes the scientific method is capable of
measuring the impact of media violence, the fact that it hasn't done so
is to him strong evidence that adverse effects don't exist. I'm not so
sure. I don't think we need science to know from observation that media
messages over time can have a powerful impact--in combination with many
other factors in a person's life. Some violent entertainment probably
does increase aggression for some viewers, though for as many or perhaps
more, the effect may be relaxing or cathartic.

If the media do have strong effects, why does it matter whether the
scientific research has been misrepresented? In part, it's precisely
because those effects vary. Even psychologists who believe that the
scientific method is relevant to this issue acknowledge that style and
context count. Some feel cartoons that make violence amusing have the
worst effects; others focus on stories in which the hero is rewarded for
using violence, even if defensively.

But equally important, the continuing claims that media violence has
proven adverse effects enable politicians to obscure known causes of
violence, such as poverty and poor education, which they seem largely
unwilling to address. Meanwhile, they distract the public with periodic
displays of sanctimonious indignation at the entertainment industry, and
predictable, largely symbolic demands for industry "self-regulation."
The result is political paralysis, and an educational structure that
actually does little to help youngsters cope with the onslaught of mass
media that surround them.

The country is riven and ailing, with a guns-plus-butter nuttiness in
some of its governing echelons and the sort of lapsed logic implicit in
the collapse of trust in money-center capitalism, which has been an
undergirding theory of a good deal of the work that many people do. The
tallest buildings, real profit centers, fall, as "wogs" and "ragheads"
defy us, perhaps comparably to how the "gooks" in Vietnam did (from
whose example Osama bin Laden may have learned that we could be
defeated). But that was on foreign soil, and we believed that we had
pulled our punches and beaten ourselves, and so remained triumphalist
for the remainder of the twentieth century, as we had been practically
since Reconstruction.

Now we're not so sure. For the first time since the War of 1812 we have
been damaged in continental America by foreigners, having made other
people hate us, though we had never needed to pay attention to such
matters before. Proxies could fight the malcontents for us in places
like Central America, and the Japanese and Germans, would-be conquerors,
had not felt much real animus, becoming close, amicable allies after the
war. Our first World War II hero, Colin Kelly, three days after Pearl
Harbor, flew his B-17 bomber (as media myth had it) in kamikaze fashion
to hit a Japanese cruiser, before the Japanese made a practice of it. To
give your life for your country, like Nathan Hale, is an ideal that's
since evaporated.

Obese individually and as a nation, and trying to stall the aging
process, we talk instead of cars and taxes, sports and movies, cancer
and entitlements, but with a half-unmentioned inkling too of what more
ominously may be in store--a premonition that our righteous confidence
might have served us just a bit too well. We never agonized a lot about
killing off the Indians, or our slaving history either, once that was
over, or being the only nuclear power ever to incinerate multitudes of
people. We've hardly seemed to notice when free enterprise segues into
simple greed, because our religious beginnings countenanced rapacity, as
long as you tithed. Settling the seaboard in official belts of piety,
whether Puritan, Anglican, Quaker or Dutch Reformed (only the frontier
tended to be atheistic), we seized land and water with abandon, joined
by Catholics, Lutherans, Methodists and what have you, westward ho. Each
group encouraged its rich men to creep like a camel through the eye of
the needle, and political freedoms were gradually canted away from the
pure ballot box toward influence-buying.

We swallowed all of that because the New World dream envisioned
everybody working hard and getting fairly rich, except when undertows of
doubt pervaded our prosperity, as in the 1930s and 1960s; or now when,
feeling gridlocked, we wonder if we haven't gone too far and used the
whole place up. We seem to need some kind of condom invented just for
greed--a latex sac where spasms of that particular vice can be
ejaculated, captured and contained. Like lust, it's not going to go
away. Nor will Monopoly games do the trick, any more than pornographic
videos erase impulses that might result in harm. The old phrase patrons
of prostitutes used to use--"getting your ashes hauled"--said it pretty
well, and if we could persuade people to think of greed, as well, that
way and expel its destructiveness perhaps into a computer screen,
trapping the piggishness in cyberspace might save a bit of Earth. The
greediest guys would not be satisfied, but greed might be looked on as
slightly outré.

Some vertigo or "near death" experience of global warming may be
required to trip the necessary degree of alarm. The droughts and water
wars, a polar meltdown and pelagic crisis--too much saltwater and
insufficient fresh. In the meantime, dried-up high plains agriculture
and Sunbelt golf greens in the Republicans' heartlands will help because
African famines are never enough. We need a surge of altruism, artesian
decency. The oddity of greed nowadays is that it is so often solo--in
the service of one ego--not ducal or kingly, as the apparatus of an
unjust state. Overweening possession, such as McMansions and so on, will
be loony in the century we are entering upon--ecologically,
economically, morally, commonsensically. But how will we realize this,
short of disastrous procrastination? Hurricanes and centrifugal violence
on the home front, not to mention angry Arabs flying into the World
Trade Center? That astounded us: both the anger and the technological
savvy. These camel-herding primitives whom we had manipulated, fleeced,
romanticized and patronized for generations, while pumping out their oil
and bottling them up in monarchies and emirates that we cultivated and
maintained, while jeering at them with casual racism in the meantime,
when we thought of it, for not having democracies like ours. To discover
that satellite TV, the Internet and some subversive preaching should
suddenly provide them access to divergent opinions disconcerts if it
doesn't frighten us, as does their willingness to counterpose
rudimentary suicide missions to the helicopter gunships and F-16s we
provide the Israelis. "Don't they value life?"

They won't be the last. The Vietcong were as culturally different from
the Palestinians as we are and yet succeeded in winning a country for
themselves, at a tremendous but bearable cost, which the Palestinians
will also undoubtedly do. Self-sacrifice can be a match for weaponry,
not because the Americans or Israelis value Asian or Arab life--at key
junctures and for essentially racist reasons they have not--but because
of the value they place on their own citizenry. As many as fifty
Vietnamese lives were lost for every American's, but that was not a high
enough ratio for us, even though, unlike some Israelis, we don't ascribe
to ourselves a biblical imprimatur. So we let them have their land, and
the domino calamities that had been famously predicted did not result.

To equate our own revolution with anybody else's is quite offensive to
us. Mostly, in fact, we prefer to forget that we had a revolutionary
past and kicked thousands of wealthy Tories into Canada, seizing their
property. We were slow to condemn apartheid in South Africa, having
scarcely finished abolishing our own at the time, and have been slow in
general to support self-governance in the warmer climates or to
acknowledge suffering among people whose skins are beiger than ours. And
if our income per capita is sixty or eighty times theirs, that doesn't
strike us as strange. We are a bootstrap country, after all. They should
pay us heed. And the whole United Nations is "a cesspool," according to
a recent New York City mayor.

But primitive notions like those of Ed Koch invite a primitive response.
And box-cutters in the hands of Taliban fundamentalists are not our main
problem. We have gratuitously destroyed so much of nature that the
Taliban's smashing up of Buddhist statues, as comparative vandalism,
will someday seem quite minuscule. We have also denatured our own
nominal religions: that is, taken the bite of authenticity out of
Christianity, for instance. Our real problem, I think, is a centrifugal
disorientation and disbelief. There is a cost to cynicism (as in our
previous activities in Afghanistan), and the systematic demonizing of
communitarianism during the cold war made it harder afterward for us to
reject as perverse the double-talking profiteering implicit in phenomena
like Enron, when we had thought that anything was better than collective
regulation and planning.

But ceasing to believe in revolutionary democracy--whether of the
secular or Christian (or Emersonian) variety--has proven costly. A
decent regard for the welfare of other people, in international as well
as local life, is going to be more than just a matter of private virtue.
In a shrinking world it may be a survival tool. Fanaticism doesn't carry
as far unless catastrophic economic conditions lurk in the background,
as we learned in the case of Germany between the two world wars but
then, when non-Caucasians were involved, forgot. Our foreign aid budget,
once the cold war ended, collapsed into spectacular stinginess, and our
sole response to September 11 has been police work. This can probably
erase Al Qaeda--which became after its instant victory that one morning
quite superfluous anyway--but not the knowledge of our vulnerability to
any handful of smart and angry plotters in this technological age. We
might see an explosion of those.

Our national self-absorption (in which the focus seems more on trying to
stay young than helping the young) may give capitalism a bad name.
Simple hedonism and materialism was not the point of crossing the ocean.
Our revolution was better than that. It was to paint the world anew.

In Steven Spielberg's latest picture, a skinheaded psychic named Agatha
keeps challenging Tom Cruise with the words, "Can you see?" The
question answers itself: Cruise sees in Minority Report, but not
well enough. He must learn to recognize his ocular limitations--a task
he accomplishes by enduring chase scenes, double-crosses, confrontations
at gunpoint and a few jocularly nauseating trials, conducted in
Spielberg's bucket-of-bugs, Indiana Jones style.

In Jacques Audiard's new picture, by contrast, Emmanuelle Devos can't
hear, and she knows it from the start. The first shot in Read My Lips
is an image of her tucking a hearing aid behind one ear, then
concealing it with her hair. Her first lines, spoken while answering the
phone in a nerve-jangling office, include the words, "I didn't hear. Can
you repeat that?" Her task in the movie--accomplished through acts of
larceny and hostage-taking--is to learn how much power she might have,
despite her aural limitations.

Ineluctable modalities of the filmable! We are discussing not only sight
and sound but also America and France, plot and character, man and
woman, innocence and experience. Film culture needs both sides; so if I
tell you that I'd gladly watch Read My Lips several times but
will be content with one viewing of Minority Report, please don't
take it to mean that Minority Report shouldn't be seen at all. On
the contrary: To miss it would be like bypassing one of those grand and
macabre curiosities that lie just off the tourist's route--like visiting
Madrid, for example, without troubling to descend the marbled stair to
the crypt of the Escorial. In the monumental edifice of Minority
Report
, as in that palatial tomb, you may encounter something madly
idiosyncratic, yet absolutely characteristic of its culture. It's just
not much of a pleasure; whereas Read My Lips is so much fun, it
could be retitled Curl My Toes.

But, to begin with Spielberg:

After last summer's release of A.I.: Artificial Intelligence, all
true filmoids were eager to know what nightmare he might next sweat out
in public. Under the influence of Stanley Kubrick, under the pretense of
selling us entertainment, Spielberg had made a nakedly confessional
movie about abandonment, disillusionment and the corruptions of show
business. Past a certain point, of course, the picture was a misshapen
wreck; but that was because A.I. struggled so desperately to
escape itself and concoct a happy ending. The harder it strained, the
more compelling, and horrifying, it became. I felt that Spielberg had at
last tapped into emotions he'd located not in his audience but in
himself. Could he maintain that connection, now that he'd established
it? That was the question hanging over Minority Report.

The answer is now before us, in the only futuristic, metaphysical
thriller I can think of that takes the violation of civil liberties as
its theme and the abuse of children as its obsession. These twin facets
of Minority Report come together, improbably but unforgettably,
in the figures of oracles known as Pre-Cogs. They lie in a bottom-lit,
Y-shaped pool somewhere in Washington, DC, in the year 2054: three
damaged orphans who are adult in form but fetal in situation, since they
are kept floating in an amniotic fluid of high narcotic content. Their
fate (you can't really call it a job) is to remain forever in that stage
of childhood where every shadow in the bedroom conceals a monster.
Unfortunately, the monsters are real: They are the murderers who will
strike in the near future, and whose crimes the psychics not only
foresee but experience. You might think someone would take pity on the
Pre-Cogs and release them from these visions, at which they convulse in
pain and horror. Instead, for the public benefit, a police agency called
the Department of Pre-Crime maintains these creatures in a permanent
state of terror.

We come to the theme of civil liberties, which must have required some
precognition on Spielberg's part, since Minority Report went into
production well before John Ashcroft declared due process to be an
unaffordable luxury. It is the movie's conceit (borrowed from the
writings of Philip K. Dick) that the police may someday arrest people
pre-emptively, for crimes they would have committed had they been left
on the loose. As chief of the Pre-Crime unit, Tom Cruise sees no problem
with this practice, either legally or philosophically--which is why he
is half-blind. He doesn't yet understand that the rights he takes away
from others may also be taken from him.

But I'm making it sound as if Minority Report constructs an
argument, when it actually contrives a delirium. A sane movie would have
been content to give Cruise a reason for arresting pre-criminals. For
example, he could have been blinded by the pain of losing a son. That,
in fact, is how the plot accounts for Cruise's keen efficiency; but it
isn't enough of an explanation for Spielberg, who goes on to embed a
second rationale in the mise en scène. Every setting, prop
and gesture shows us that Cruise does this job because it excites him.

He's in his brush-cut mode in Minority Report. He rockets around
Washington, rappels onto the pre-crime scene, dives at the last second
between the would-be killer and the not-quite-victim--and that's just
the conventional part of his work. The real thrill comes from
interpreting the Pre-Cogs' visions, which he does in front of a
wraparound computer screen while a stereo pipes in the Unfinished Symphony. Waving his hands against the music's rhythm, making
digital images slide around at will, he looks like a cross between an
orchestra conductor and a film editor, working at some Avid console of
the future.

So childhood pain in Minority Report bleeds into fear of crime,
which blossoms into a fantasy of omnipotence--and this fantasy in turn
sows further pain, in the form of little stabs to the eye. In the year
2054, government bureaus and advertising agencies alike scan your retina
wherever you go, blinding you with lasers a hundred times a day to track
your whereabouts, your spending, your preferences in clothing from the
Gap. What does it matter if Cruise comes to see the dangerous fallacy of
pre-crime? Human freedom has already vanished from his world, in the
blink of an eye.

I hope it's clear from this summary that Minority Report not only
represents another of Spielberg's Major Statements but also continues
his risky new practice of self-expression--risky because his feelings
remain unresolved, and also because he allows them to be Major. A
solemnity pervades the movie, making itself felt most tellingly at
moments of incidental humor. Spielberg has never been a rollicking
filmmaker--the human activities that least interest him are laughter and
sex--but in the past he's known how to raise a chuckle, and he's known
when to do it. In Minority Report, though, clumsy throwaway gags
keep interrupting the action, as if Spielberg had lost his sense of how
to play with the audience. Slapstick assaults upon a family at the
dinner table, or Olympian sneers at bickering couples, do nothing to
leaven Minority Report. The movie's ponderousness is relieved
only by Samantha Morton's uncanny portrayal of the psychic Agatha and by
Lois Smith's turn as Dr. Hineman, the researcher who ought to have
healed the Pre-Cogs but instead turned them into tools of the police.
When Cruise goes to visit Smith at her greenhouse hideaway, the colors
of Brutalist architecture briefly give way to those of nature, and the
pace of the acting triples. Speaking her lines over and around Cruise,
Smith plays her role in the manner of Vladimir Horowitz dashing off an
étude.

"Who is the strongest Pre-Cog?" Cruise wants to know. Smith smiles
indulgently at the blind man. "Why, the woman, dear." This claim of
female superiority has the charm of gallantry; it's Spielberg's gift to
the actress. But as it's developed in the rest of the movie, the notion
(like far too much of Minority Report) lacks the flourish that
gallantry requires. I offer sincere congratulations to Spielberg for at
least two-thirds of this picture; but now I think it's time to leave
Minority Report and consider a movie about a real woman.

Her name is Carla. She works for a real estate development company,
where she's treated like part of the office equipment. As embodied by
Emmanuelle Devos, Carla has an apology for a hairdo and a choked-off
complaint for a lower lip. When she's casually insulted--her paperwork
ruined by the spill from a coffee cup, her skirt stained suggestively
under the rump--Carla falls apart so completely that her boss offers to
let her hire an assistant. "Trainees are cheap," he explains, as if that
would make her feel better. She hires one anyway and comes up with the
man of her dreams: Paul (Vincent Cassel), a greasy, long-haired,
leather-jacketed, muttering ex-con, who assures her (while his eyes scan
for the exit) that sure, he's worked with, uhm, spreadsheets. Plenty of
them.

One of the pleasures of Read My Lips--a pleasure that isn't
available in Minority Report--is the way the movie invites you to
see into these characters, who always amount to more than their
functions in the plot. Early on, for example, when Carla and Paul are
just getting to know each other, you see how they might be bound by a
common lack of decorum. "What were you in jail for?" Carla asks bluntly,
violating rule number one for dealing with ex-cons. Paul answers her,
then asks in turn, "So you're deaf? I mean, really deaf? Like, you can't
hear?" Although she tells him to shut up, Carla doesn't hesitate to play
along when he asks her to read someone's lips. He likes her willingness
to trespass on others. She likes the muscle he provides.

Although Carla's alliance with Paul develops uneasily, it's not without
humor. (No false notes here; Audiard always gets the tone right.) But
even though the bumps and jolts of the plot are intriguing--and far more
numerous than those in Minority Report--what's perhaps most
engaging in Read My Lips is the evocation of Carla's reality. The
images are often incomplete, oddly framed, out of focus, unsteady,
surprisingly closeup, bathed in shadow, richly colored, dreamily slow.
This is the subjective vision of human eyes, not the objective gaze of
the camera--and Carla sees it all the more vividly because the world of
sound is closed.

I like the sensuousness of Read My Lips and the nuance of its
portrait of a woman. I like the sense of possibility in the characters,
the interplay between Devos and Cassel, the mundane realism of the plot
(which asks you to believe only that the real estate business isn't
entirely clean, and that large sums of cash sometimes flow through
bars). I even like the happy ending. Although Spielberg's picture is the
one titled Minority Report--an ironic name for a Tom Cruise
blockbuster, as its maker surely knows--Read My Lips files the
story that's too infrequently heard.

Much as I hate to, I'm going to start by talking about the damn money.
I'm only doing it because almost everyone else is.

It's not just the author profiles and publishing-trade columns, but
seemingly every other review of The Emperor of Ocean Park that
mentions, way before stuff like plot or characters, the $4.2 million
Knopf paid Yale Law professor Stephen L. Carter for this first novel and
another to come. Most, if not all, of these pieces seem incredulous that
an academic-of-color could reap the kind of dough-re-mi for thriller
writing that the John Grishams and Tom Clancys could command. Pundits of
both colors--or of what Carter's novel continually refers to as "the
darker nation" and "the paler nation"--sound pleasantly surprised that
an African-American male could earn some pop-cultural buzz by being paid
millions of dollars for doing something that doesn't require a ball or a
microphone.

I'm guessing Carter has the grace to be appreciative about all this. But
I'm also guessing that the author of Reflections of an Affirmative Action Baby is equipped with inner radar delicate enough to pick up
faint signals of condescension (or worse) beneath all this hype. Sifting
through the reviews so far, especially those taking Carter to the
woodshed, one detects glimmers of doubt as to whether the book or the
author deserves all that money and attention. No matter that Carter,
Yale Law's first tenured African-American professor, has established his
credentials as a legal scholar and public intellectual, having published
seven nonfiction books whose subjects include values (Integrity,
Civility), faith in public life (The Culture of Disbelief, God's Name in Vain) and, of course, race
(Reflections...). Black people have been through enough job
interviews to recognize the skeptically arched eyebrows in key precincts
of Book-Chat Nation over Carter's big score. The eyebrows ask: Is the
book worth all this fuss--and all that damn money?

The short answer is yes, though we'll get to the longer, more
complicated answer in a few clicks. First I want to address the other
recurring motif in the reviews so far: a belief that the novel's primary
value--if not the only legitimate reason for all that money--comes in
the way it foregrounds privileged reaches of African-American society.
As if Dorothy West, John A. Williams, Nella Larsen, George S. Schuyler,
John Oliver Killens, Charles W. Chesnutt, Lawrence Otis Graham and E.
Franklin Frazier, the Veblen-esque sociologist-satirist who wrote Black Bourgeoisie, had never been born, much less ever bothered
writing books. To these weary eyes, such incredulity over class issues
reflects nothing more than the same-as-it-ever-was manner in which
novels by African-Americans are waved toward the sociocultural
checkpoint before they can compete for artistic consideration. And since
it's being marketed as a legal thriller/whodunit, The Emperor of Ocean Park has the added burden of being stigmatized as a genre
piece. Hence the carping in some reviews over Emperor, whose
closing kickers spring merrily like tripwires.

Hello. It's melodrama. There are a lot of smart people who agree
with Raymond Chandler, who confessed to a friend in 1945 that he chose
to write melodrama "because when I looked around me it was the only kind
of writing that was relatively honest." Also, as Chandler and other smart
people drawn to genre have repeatedly proved, it's possible to hang
lyricism, social observation, even political ideas on melodrama's broad
shoulders so long as you don't forget to play by the rules of the genre.
One more thing: Melodrama, when played at top speed, often can be
transformed into something very close to satire or, at least,
sophisticated farce.

The Emperor of Ocean Park doesn't move quite fast enough for
that, which may be its biggest problem. Still, it is sophisticated
entertainment: witty, elegantly written (way better than Grisham or
Clancy, OK?), conceptually outrageous in a genteel way and flush with
conflicting ideas unleashed in the stick-and-move fashion of a
freewheeling sparring match. The surprise isn't that Carter can write
fiction. It's his showmanship in mixing up the car chases, chess
strategies, red herrings and gun battles with such dark, rueful
observations as this:

I suddenly understand the passion of the many black nationalists of the
sixties who opposed affirmative action, warning that it would strip the
community of the best among its potential leaders, sending them off to
the most prestigious colleges, and turning them into... well, into young
corporate apparatchiks in Brooks Brothers suits, desperate for the favor
of powerful white capitalists.... And the nationalists were right. I am
the few. My wife is the few. My sister is the few. My students are the
few. These kids pressing business cards on my brother-in-law are the
few. And the world is such a bright, angry red.... I stand very still,
letting the redness wash over me, wallowing in it the way a man who has
nearly died of thirst might wallow in the shower, absorbing it through
every pore, feeling the very cells of my body swell with it, and sensing
a near-electric charge in the air, a portent, a symbol of a coming
storm, and reliving and reviling in this frozen, furious instant every
apple I have ever polished for everybody white who could help me get
ahead.

This passionately skeptical, somewhat self-loathing voice belongs to
Talcott Garland, who also answers to the names "Tal" and "Misha." (This
multiplicity of names is one of the little jokes that Carter threatens
to run into the ground.) A law professor at an unnamed Ivy League
university, Talcott is one of four children of Oliver Garland, a
conservative judge appointed by Nixon to the US Court of Appeals, who
might have served on the Supreme Court if his nomination hadn't been
derailed because he was seen hanging around the federal courthouse with
a college roommate named Jack Ziegler, a former CIA agent and a sinister
presence skulking in the dark alleys of American power.

As the novel begins, Tal's father, whom Time once dubbed the
"emperor of Ocean Park" because of his family's impressive digs in the
Oak Bluffs section of Martha's Vineyard, has been found dead in his
study. Tal is, at best, indulgent to older sister Mariah's suspicions
that their father met with foul play. Still, Tal suspects something's
afoot when, at the judge's funeral, Ziegler pulls him aside to ask the
whereabouts of some "arrangements" that the judge stashed away
somewhere. Knowing "Uncle Jack" all too well, Tal suspects that these
"arrangements" don't exactly fall into customary categories of
post-mortem details. By the time bogus FBI agents try to scare him into
telling what little he knows and the Episcopal priest who conducts the
funeral is tortured and murdered, Tal's paranoia has kicked into third
gear.

All of which Tal needs like a root canal. Things are rough at the law school, with various and sundry colleagues intruding their personal dramas onto his own. One of them, it turns out, is in competition with
Tal's stunning wife, Kimmer, short for "Kimberly," for potential
appointment to a federal judgeship. Kimmer frets and fusses about the
appointment, oblivious to her husband's concerns for their safety from
whatever or whoever is stalking them. She barely notices the shadow
stalkers, traveling long distances from home to make rain for her
high-toned law firm. Tal suspects Kimmer is having an affair, but can
barely keep her close by long enough to probe for concrete evidence. He
concedes being flummoxed in general by the nature of women, seeking
respite from such mysteries in "the simple rejuvenating pleasure of
chess." Indeed, the conundrums of chess, a game where, as in life, white
always gets the first move over black, play a metaphoric role in the
mystery, complete with missing pawns from the judge's own set and a
strategic gambit labeled "Excelsior."

A few words about Tal: He's the hero of the story, but he's not an easy
man to admire. Readers so far think he's at best an unjustly beleaguered
nerd or at worst an embittered brat, as self-absorbed as the mercenary
students, career women and secular humanists he slaps with his words. He
behaves badly at times, never more so than in a memorably chilling set
piece in which he bullies and humiliates one of his students, "an
unfortunate young man whose sin is to inform us all that the cases I
expect my students to master are irrelevant, because the rich guys
always win.... His elbow is on the chair, his other fist is tucked under
his chin, and I read in his posture insolence, challenge, perhaps even
the unsubtle racism of the supposedly liberal white student who cannot
quite bring himself to believe that his black professor could know more
than he.... I catch myself thinking, I could break him." And he
does, adding to the rapidly expanding ledger tabulating his
self-disgust.

On the other hand, he loves his young son Bentley in a way that
frightens him, especially when he visualizes a future in which Kimmer
drifts out of his life with son in tow. He volunteers in a soup kitchen,
partly as penance for his transgressions, partly to turn down the noises
his own inner radar makes and submit to Christian values. He also yearns
for a grounded sense of family, though relations with his aforementioned
sister are strained and his brother Addison--the one Tal believes Dad
liked best--is a commitment-phobic radio personality who keeps slipping
from sight to avoid close scrutiny. (He has his reasons.) And there was
a younger sister, Abby, something of a family renegade, who died in a
car accident. "When Abby died," Tal recalls, "my father went a little
nuts, and then he got better." It's the book's most pithy line. Don't,
for a minute, forget it.

Carter is very good at evoking the wonderlands of American life, whether
the Vineyard, Aspen or Washington's "Gold Coast" enclave of wealthy,
powerful African-Americans. He's even better at describing the
machinations and intrigue in law school faculty offices--which shouldn't
be a surprise, though Carter's extended disclaimer (pages 655-57) begs
readers not to confuse Tal's spiky, tempestuous professional life with
his own. Still, from what readers know of Carter's ideas about religion,
ethics, politics and manners, it's not too much of a stretch to see Tal
asserting his creator's right to probe, confound and, whenever possible,
shatter conventional ideological boundaries.

At one point, Tal has a reverie about one of his father's standard
speeches to white conservatives, pointing to the overlap of their
opinions on such issues as school vouchers, abortion and gay rights with
those of the African-American mainstream. "Conservatives are the last
people who can afford to be racist. Because the future of conservatism is black America!" Quickly, Tal's mind makes a countermove. "Because
there were a few little details the Judge always left out. Like the fact
that it was conservatives who fought against just about every civil
rights law ever proposed. Like the fact that many of the wealthy men who
paid for his expensive speeches would not have him in their clubs....
The Judge was surely right to insist that the time has come for black
Americans to stop trusting white liberals, who are far more comfortable
telling us what we need than asking us what we want, but he never did
come up with a particularly persuasive reason for us to start trusting
white conservatives instead."

For fans of the well-made thriller, these and other digressions may seem
like patches of glue. But for those who think the plot is, as with the
rest of the book, somewhat overstuffed with data, false leads, sudden
frowns and black-and-blue contrivances, Tal's asides come across like
flares of random, cheeky insight. As the quote above suggests, neither
left nor right is spared Tal's withering assessment, though if I were
keeping score, the liberal humanists get it in the teeth far more than
those with more spirit-based devotions explaining their identities.

Readers have come to expect books written by African-Americans to come down hard on a sociopolitical point. Mystery lovers want airtight
solutions. The Emperor of Ocean Park fulfills neither
expectation. And that, as much as anything, earns both its money and its
respect. Novels of ideas, in whose company Emperor surely belongs
if I read my Mary McCarthy right, are supposed to be exactly that: about many ideas and not just one. Someone, maybe the author of Anna Karenina, once suggested that fiction should rouse questions, not
answer them. Once again, the defense calls Raymond Chandler to the
stand: "It is no easy trick to keep your characters and your story
operating on a level which is understandable to the semi-literate public
and at the same time give them some intellectual and artistic overtones,
which the public does not seek or demand or in effect recognize, but
which somehow subconsciously it accepts and likes."

The Emperor of Ocean Park is no Farewell, My Lovely. But
Carter is on to something. And he may someday deliver what Chandler
does, along with a hearty serving of something non-Chandler-esque. What
that something may be is hinted at in a few lines close to the novel's very
end:

"That truth, even moral truth, exists I have no doubt, for I am no
relativist; but we weak, fallen humans will never perceive it except
imperfectly, a faintly glowing presence toward which we creep through
the mists of reason, tradition, and faith."

Your move, Tom Clancy.

Even as campaign finance reformers celebrated the long-awaited passage
of the McCain-Feingold bill this spring, they cautioned the public not
to assume the fight for reform was over. "This bill will only thwart the
special interests for so long," Senator McCain himself predicted.
"Twenty years from now, they will have figured out other ways to get around it, and another
couple of senators will be fighting to break the endless cycle of
corruption and reform." While McCain-Feingold is a significant
legislative accomplishment that will help to plug the gaping soft-money
hole in the existing system, these reformers explained, there are still
gaps through which private money can exert undue political
influence--and the fight to close them is just beginning.

This is the way campaign finance reform has worked since the first piece
of remedial legislation was passed in 1907--a cycle of public outrage,
stopgap legislation and new forms of abuse, prompting further outrage.
Lasting solutions have so far proven elusive--in large part because of
the Supreme Court's 1976 Buckley v. Valeo ruling that campaign
spending limits are unconstitutional. So reformers are stuck fighting
with more or less the same tools they've always used: contribution
limits, voluntary spending limits, public financing and full disclosure
of funding sources.

The limited effectiveness of these tools has prompted two Yale Law
professors, Bruce Ackerman and Ian Ayres, to offer a radical rethinking
of the problem. Ackerman--who last attracted public notice with his book
The Stakeholder Society, in which he proposed to eliminate
chronic economic injustice by giving every young American adult a stake
of $80,000, financed by an annual wealth tax--and Ayres clearly have no
qualms about tackling big problems. Their new book, Voting With Dollars, starts with a simple and seductive question: If the old
reform tools aren't working, why not try new ones? Rather than imposing
increasingly complicated contribution and spending limits, they suggest
removing them. Rather than relying on bureaucracies to distribute public
funds to candidates, they say, let the voters do it directly. And rather
than mandating complete disclosure of politicians' funding sources, they
propose keeping such information completely secret--especially from the
politicians themselves.

At the core of Ackerman and Ayres's proposal is what they call the
"secret donation booth." Like votes, the authors argue, campaign
contributions should be made anonymously. That way, private interests
could not influence elected officials with their money, because there
would be no way for a contributor to prove that he had given money to a
candidate. (So as not to discourage citizens of average means from
donating modest amounts by denying them the ability to take credit for
their gifts, Ackerman and Ayres permit the government to confirm that a
donor has given up to $200.)

The authors assert that, just as the introduction of the secret ballot in the late nineteenth century put an end to the then-common practice of vote-buying, the implementation of a secret donation booth (in actuality, a blind trust administered by the FEC) would eliminate influence-buying. Sure, John Richman might claim he's given a million to
Jane Candidate, but such unverifiable talk is cheap, and politicians
will attach to such assurances the same minimal weight they attach to
promised votes. Once that avenue of political influence is closed off,
Ackerman and Ayres reason, donors interested solely in the corrupting
power of their contributions will have no reason to pour their money
into politics, and private giving will be left to those few donors
motivated by pure political ideology.

To make up for the funds that would be lost once this private money
leaves the system, Ackerman and Ayres propose that the government give
every registered voter fifty "Patriot dollars"--money they'd be able to
put toward whichever federal candidate, national political party or
interest group they wanted, simply by going to their local ATM. Based on
voter participation numbers from the 2000 election, Ackerman and Ayres
calculate that the Patriot system would infuse $5 billion into a federal
election cycle, dwarfing the $3 billion that was spent in 2000. Thus,
they reason, Patriot dollars would not only insure that viable
candidates had enough money to fund their campaigns but would also make
them dependent on funds from, and thus more responsive to, the
electorate as a whole.

So far, so good. And there's more: In addition to the secret donation
booth and the Patriot system, "voting with dollars" would produce two
compelling side effects.

First, because each registered voter would have fifty Patriot dollars to spend on the election, citizens would be encouraged to inform themselves earlier and more thoroughly about issues and candidates, so as to make the best use of their allocation. This heightened civic engagement would likely
translate into higher voter turnout and a consistently better-informed
electorate--what Ackerman and Ayres call "the citizenship effect."
Second, by avoiding all spending limits, a common plank of more
traditional reform platforms, "voting with dollars" would not run afoul
of the Supreme Court, which famously ruled in Buckley that "the
concept that government may restrict the speech of some elements of our
society in order to enhance the relative voice of others is wholly
foreign to the First Amendment."

In theory, then, "voting with dollars" has lots of appeal. It's a fresh
approach to an old problem; it promises to reinvigorate a tired
electorate; and it's Supreme Court-proof. Not satisfied with a theoretical discussion of their proposal,
however, Ackerman and Ayres devote the bulk of their book to describing
what their reform would look like in practice. And this is where they
run into trouble.

To be sure, many of their implementation mechanisms are impressively
well-researched and carefully crafted, and at first they make it seem as
if "voting with dollars" just might work. To prevent a donor from
getting around the anonymity of the donation booth with an unusually
large contribution, for example, the authors propose to enter large
contributions into a candidate's account in random amounts at random
intervals, according to a special "secrecy algorithm." That way, the
donor couldn't simply tell the candidate to expect his account to
increase by a certain amount on a certain date, and then claim the
credit. (Ackerman and Ayres would bar what they call "stratospheric"
contributions to eliminate amounts too large to be hidden even by their
secrecy algorithm.) A donor would also be unable to prove he'd
contributed by flashing around a canceled check made out to the blind
trust, since all contributions would be revocable for a five-day period,
giving the donor no way to prove he didn't simply ask for his money back
the next day.

In spite of these intricate measures, however, there are a few reasons
Ackerman and Ayres's implementation scheme is fatally flawed. First, no
matter how refined your secret donation booth is, candidates will always
be able to figure out where their money's coming from. For proof of
this, one need look no further than our current voting system. Even with
the secret voting booth, candidates use polling, voter registration
rolls, demographic data and a host of other increasingly sophisticated
tools to figure out with eerie precision who's going to support them,
and they target their campaigns accordingly. Similarly, while a secret
donation booth would prevent politicians from knowing precisely how much
money each individual or privately funded PAC is giving, candidates
would still have a pretty good idea of who their big donors were likely
to be, and they'd still grant those likely donors uncommon access--a
politician's most valuable resource. Even if the secret donation booth
had been in place during the 2000 election, for instance, George Bush
would still have asked Ken Lay for fundraising help (by, say, organizing
a fundraising dinner--a permissible activity under Ackerman and Ayres's
paradigm, as long as there's not a per-plate charge). And Ken Lay would
still have been invited to meet with Dick Cheney's energy task force
once the pair was in the White House.

Then there's the problem of independent expenditures. Ackerman and Ayres
are sharply critical of McCain-Feingold's attempts to rein in such
electioneering, calling the act's restrictions on such spending in the
months leading up to an election an "important weakness" that "restrains
free speech." (Because of arguments like this, these restrictions are
widely held to be the most vulnerable part of McCain-Feingold. In June the FEC narrowly rejected a proposal that would have significantly
weakened the act's independent expenditure restrictions, and upcoming
court challenges target these restrictions as well.)

And yet, independent expenditures are a significant obstacle to any
attempt to reduce private money's role in politics, as they allow any
individual or interest group with money the chance to make an end run
around the regulated campaign finance system. Conventional attempts to
curtail these expenditures may not solve the whole problem, limited as
they are by the First Amendment, but they're better than the unregulated
alternative that Ackerman and Ayres propose. In their reform scenario,
independent "issue" campaigns that do not explicitly endorse a
candidate--according to the Court's limited definition of express
advocacy, which focuses on certain "magic words" such as "elect" and
"vote for"--would be unregulated. In other words, organizations would be
free to fund "issue" ads whose timing and content are obviously intended
to help a particular candidate, as well as to publish the identities of
their contributors and the magnitude of their support, as long as those
ads didn't explicitly tell you how to vote. One can only imagine that in
the anonymous "voting with dollars" world, this opportunity to claim
credit for expenditures clearly designed to help a particular candidate
would be all the more alluring. And yet the only remedy Ackerman and
Ayres offer is their statute's "swamping control," which would increase
Patriot allotments in the next election cycle whenever private spending
skewed the national Patriot/private ratio below 2 to 1. Other than this
after-the-fact correction, Ackerman and Ayres offer no barriers to
prevent private money from flowing to such unregulated channels.

In the end, Ackerman and Ayres's paradigm is handicapped by its
Court-centered approach. The authors chide traditional reformers for
painting Buckley v. Valeo as the primary roadblock to reform,
saying that such a view is both counterproductive, because the Court is
unlikely to reverse Buckley anytime soon, and wrong, because
Buckley upholds such fundamental constitutional principles as
free speech. By embracing Buckley, they argue, their approach is
more pragmatic and more principled. And yet, its legal pragmatism
notwithstanding, "voting with dollars" does not confront the central
injustice of the current system: the exorbitant influence of big money.
In focusing solely on ending the potential for quid pro quo
corruption--the one aspect of campaign finance that the Court has
consistently shown itself eager to regulate--Ackerman and Ayres downplay
the degree to which private money controls politics even without such
blatant dealmaking. The truth is, as long as politicians are dependent
on private money to finance their campaigns, monied interests will play
a disproportionately large role in setting the political agenda. This is
why traditional reformers have chafed at Buckley's narrow
definition of corruption, and it's why they continue to advocate
solutions that use a combination of disclosure laws, limits and public
financing. At its core, the campaign finance reform movement is about
more than simply putting an end to under-the-table deals between wealthy
individuals and unscrupulous politicians. It's about opening up the
electoral system, so that people without networks of wealthy friends
will be able to wage viable campaigns for public office, and won't be
beholden to private interests once they get there. While Patriot dollars
are a good step in this direction, they don't go far enough. Without
contribution and spending limits, the public financing offered by
Patriot dollars would quickly be drowned out by the torrents of private
money flowing into the system.

For all its shortcomings, Voting With Dollars deserves credit for
pushing reformers to rethink some of their cherished assumptions about
what works, and what's desirable. However, by refusing to consider more
standard approaches to reform like disclosure, contribution limits and,
in particular, voluntary public financing systems like those currently
in place in Maine and Arizona, Ackerman and Ayres have boxed themselves
into an unworkable system. Ultimately, Voting With Dollars'
radical approach to campaign finance reform would expand big money's
role in politics, rather than insulate democracy from it.


It's easy to rephrase Tolstoy's opening to Anna Karenina so it
describes junkies, who all share an essential plot line: Who and how to
hustle in order to score. But in the world of postwar jazz, Charlie
Parker gave junk an unprecedented clout and artistic aura. Bebop, the
convoluted, frenetic modern jazz he and Dizzy Gillespie, among others, formulated, demanded intense powers of
concentration. Bird played so far out of nearly everyone else's league
that his heroin habit seemed to explain his godlike prowess. So heroin
became an existentialist response to racism, to artistic rejection, a
self-destructive way of saying Fuck You to mainstream America's 1950s
mythologies. Parker warned everyone from young Miles Davis to young Chet
Baker away from smack, but few heeded him. In 1954 Davis weaned himself
from a four-year addiction; in 1988 Baker died after decades of living
in Europe as a junkie, found in the street below his Amsterdam hotel
window. (Did he jump? Was he pushed?)

Oklahoma-born and California-bred, Baker had one amazing artistic gift:
He could apparently hear nearly any piece of music once and then play
it. He intuitively spun melodies on his trumpet with a tone critics
compared to Bix Beiderbecke's, and spent his long and unbelievably
uneven career relying on that gift and coasting on his remarkable early
breaks. In 1952 Charlie Parker played with him in LA, giving him instant
cachet. When Gerry Mulligan hired him for the famed pianoless quartet
that is the quintessential white West Coast Cool band, it made him a
jazz star. After a drug bust broke the group up, Baker began singing;
his wispy balladeering and Middle American good looks gave him entree to
a broader public. During his early 1950s stint with Mulligan, he
unbelievably beat Louis Armstrong and Dizzy Gillespie to win critics'
and fans' polls; his first album as a vocalist, which featured "My Funny
Valentine," got him lionized on the Today and Tonight
shows and in Time. From there on, his life took on a downward
bias within a junkie's relentless cycles.

Deep in a Dream: The Long Night of Chet Baker aims to synthesize all the information about the trumpeter and to interpret him within the broader contexts of popular culture. Author James Gavin had access
to unpublished autobiographical notes and interviews with Baker's
erstwhile memoir collaborator Lisa Galt Bond, and also draws extensively
on books like Jeroen de Valk's Chet Baker: His Life and Music and
Chet Baker in Italy; he apparently scoured archives for
interviews, profiles, pictures and video and audio materials as well,
stirring in dollops from Bruce Weber's overripe 1989 movie about Baker,
Let's Get Lost. Gavin has tracked Baker across Europe and America, distilled the wildly divergent attitudes toward him and his work, and attempted to make a case for what endures while not flinching from calling clunkers. He confronts black jazzers' resentment of Baker's
playing: Most heard him, with excellent reason, as a paler, milder Miles
Davis, yet he won polls and looked like he was making big money. As
Gavin points out, Baker's lilting lyricism and even his demeanor owed
almost everything to Davis's, but Baker wasn't raking in sales like Dave
Brubeck, though he was churning out streams of highly variable product.
In fact, Gavin explains the popularity of the sappy Chet Baker With Strings album, the trumpeter's bestselling 1954 disc (which sold an
uncharacteristic 35,000-40,000 copies the first year), by comparing it to popular contemporary mood music--an apt and telling linkage.

Gavin's discussion of that record strikes one of his leitmotifs, Baker's
charismatic visual appeal:

William Claxton's cover photo was so dreamy that record shops all over
the country put the LP on display. Claxton showed Baker at his peak of
beauty, staring out wistfully at the session, cheek resting against his
horn's mouthpiece.... Many of the buyers were young women with little
interest in jazz, who bought the LP for its cover. They were surprised
to hear music as pretty as Baker was.... It was his looks, more than his
music, that the Hollywood crowd cared about.

He's shrewd about Baker's singing:

[Record producer Dick] Bock listened in alarm as [Baker] struggled to
sing on key, pushing the session into overtime.... Baker's dogged
persistence didn't impress the musicians, who were reduced to
near-invisible accompanists, tiptoeing behind his fragile efforts....
But as people stared at the cover and listened to Baker's blank slate of
a voice, they projected all kinds of fantasies onto him.... Baker became
the first jazz musician to attract a strong homosexual following.

Gavin is quite good at debunking longstanding myths about Baker, many of
which Baker started himself. He didn't beat out loads of trumpeters to
play with Bird in LA; a studio pianist, not Parker, hired him. It's
highly unlikely Bird told East Coasters like Davis and Dizzy Gillespie
that "there's a little white cat on the coast who's gonna eat you up."
For Gavin, this self-mythologizing is a key to Baker's recessive, almost
invisible character: "Just as he discovered how to seduce the camera
lens into depicting him in make-believe terms, he learned to glamorize
the truth into a fairy tale of romantic intrigue."

Naturally, the biographer seeks the man behind the layered tales. Here
Gavin circles a black hole, because Baker was, as one witness after
another testifies, nearly completely unrevealing. He didn't read, or
speak, or otherwise express: He was "cool." Longtime junkiedom only
hardened this character trait into manipulative blankness. So Gavin
looks at Baker's doting, pushy mother and his violent failure of a
father, checks out Baker's high school beatings for being a pretty boy,
intimates that Baker's brief and harsh version of heterosexual sex may
have covered for repressed homosexuality, and links him to the waves of
rejection, from the Beats as well as Hollywood types like Marlon Brando
and James Dean, rippling the 1950s. It's suggestive, though not
necessarily convincing, since unlike other jazzers--Davis and Charles
Mingus, for instance--Baker had no real contact with or interest in
other artistic subcultures.

Baker's critical reputation kept crashing after the Mulligan quartet
disbanded in 1954, and his drug use continued to escalate after that
time, when his heroin addiction began. By 1966, he had hit bottom: He
was badly beaten, probably because he ripped off a San Francisco drug
dealer, and his upper teeth had to be pulled. His embouchure wrecked,
his career, already smoldering, looked like it was finally in ruins. He
worked in a Redondo Beach gas station and applied for welfare. Against
the odds, record-label head Bock bought him dentures, and for more than
a year he worked--probably harder than he ever did before or after--to
rebuild something of the limpid trumpet sound that once made girls
shudder.

In 1959 he had relocated to Europe, where he stayed for the rest of his
life (except for a couple of brief homecomings) to avoid prosecution for
drug busts. Inevitably, he got busted in Europe instead. Gavin rightly
notes that the Europeans, especially the Italians, adopted Baker as a
damaged genius, an artist in need of understanding and patronage. It
didn't help. His trajectory careened mostly down; upward bursts of
musical lucidity flashed against a churn of mediocrities and an
ever-more-snarled life. His talent languished: He never expanded his
musical knowledge, nor did he really learn to arrange or compose or even
lead a band. He relied on producers and agents to direct his musical
life; he didn't bother conceptualizing his own creative frameworks. He
always demanded cash payments--no contracts, no royalties--on his
endless scramble to score. And as women revolved through his life or
fought over him and were beaten by him, he tried a few bouts at detox
but compressed even further into a junkie's two-dimensionality. By the
time he died, most American jazz fans thought he was already dead.

For this last half of his book, Gavin, even buoyed by research, swims
upstream against the cascading flow of a junkie's essential plot line.
For decades Baker is mostly chasing drugs, screwing anyone within reach,
tumbling downward creatively and personally, and alternating
manipulatively between victim and abuser. Except as a voyeur it's hard
to care, especially since, with exceptions I think even rarer than Gavin
does, Baker's music was generally worthless. Junk didn't make him a
musical superman; it simply drove him to make fast, sloppy recordings
with under-rehearsed bands, playing horn that was so unpredictable in
quality it could sound like an abysmal self-parody. Sympathetically
balanced as he tries to be, even Gavin can only cite a handful of
ex-sidemen as Baker's musical legacy of influence. Instead, he depicts
Baker as a kind of cultural icon rather than a cultural force.

It is one of history's ironies that Baker was resurrected after his
death by a film made shortly before it. Bruce Weber, a fashion
photographer famed for his homoerotic Calvin Klein and Ralph Lauren ads,
has a sharp eye for the scandalous, and decided to make Let's Get Lost when he saw Baker at the trumpeter's brief fling at an American
comeback in 1986. He fell for what an associate described as "beauty
that looked kind of destroyed." Weber bought him a French beatnik
wardrobe from a Paris designer, and paid him $12,500 for a performance
that Gavin describes: "eyelids sagging, slurring his words, all but
drooling.... Unless he got what he needed, [Weber's assistant] said, 'he
wouldn't have sat still a minute for us.'" The documentary refired
interest in Baker among boomers and Gen Xers, who responded to the
bathetic junkie glamour of his apparent frailty, personal and artistic,
just as their 1950s avatars had. Reissues of Baker's albums on CD have
gathered mass and sales since.

Which leaves us with Baker's mysterious death, long haloed by a host of
theories. Gavin rejects accident, reporting that "the window [of the
hotel room] slid up only about fifteen inches, making it difficult, if
not impossible, for a grown man to fall through accidentally."
Dismissing speculation that Baker might have lost his room key and tried
to climb the hotel's facade, Gavin says it's unlikely he could have gone
unnoticed on such a busy thoroughfare. He dismisses homicide, as did
Baker's remaining friends and the Dutch police, and concludes that Baker
was shooting his favorite speedballs and committed a sort of
passive-aggressive suicide by "opening a window and letting death come
to him.... [He] had died willfully of a broken heart."

That's a pretty sentimental final fade for a hard-core character like
Baker, who for all Gavin's determined nuance ultimately seems less rebel
than junkie. Maybe Gavin should have pondered Naked Lunch. Then
he might have ended his book with, say, Steve Allen's take, since Allen
was one of the many Baker burned: "When Chet started out, he had
everything. He was handsome, had a likable personality, a tremendous
musical gift. He threw it all away for drugs. To me, the man started out
as James Dean and ended up as Charles Manson."

Southern Exposure, which somehow looks--even in its third decade,
in the twenty-first century--as if very advanced high school students
had just stapled it together and put it on your doorstep (that's a
compliment...The Nation strives for that effect, too), is still
doing a fine job on its old beat: investigating the strange mix of
culture and corporatism that has made the South what it is today. By
extension, every issue poses the same basic question: What exactly is
America? In looking at the South in great detail over many decades,
Southern Exposure has begun to propose, although not explicitly,
some answers.

First, America is a place that advocates equality but thrives on
inequality. In the 2002 Spring and Summer issue, which is subtitled "The
South at War," James Maycock has published a piece on the black American
soldier's experience in Vietnam--especially for people who did not live
through the civil rights movement and that terrible Southeast Asian
conflict, this piece will be riveting. "I'm not a draft evader,"
declares one African-American draftee on reaching Canada. "I'm a runaway
slave."

America is also a place where the Marlboro Man has not abdicated, as
Stan Goff shows in his gonzo essay on Vietnam and American masculinity
(in fact, it has crossed my mind that all those ads may have been psy-ops prep for George W. Bush's ascendancy). And last, America is a
place that loves the Army. In its useful and unassailable roundup on the
Southern states and the war industry, Southern Exposure comes up
with important facts. Military contracts to Florida companies alone amounted to $15.2 billion last year. The military, of course, is a good place to have your money right now. For example, Florida's education budget was slashed by 4.2 percent last year, while the stocks of Northrop Grumman and Raytheon, two of the largest companies with investments in Florida, rose 25 percent and 40 percent, respectively. Nutshell portraits of thirteen states provide a real sense
of the give and take between politicians, the military and the job
market, and population in places where the military chooses to spend.

Note also: Of the top twenty-one cities involved in military production
in 2001, excepting Hartford, St. Louis, Indianapolis and Seattle, every
city on the list is in the South or in California. According to
Southern Exposure, 66 percent of the weapons sold to Israel under
the Foreign Military Sales program were produced in the South. The South
has helped situate America in the world today; that puts it in a unique
moral position. But after reading this issue of Southern Exposure, one really wonders: Do most Southerners care?

Yoga's Antiterror Position

After reading about B-29s and F-16s and macho men and Hellfire
missiles made in Orlando--of all places--I was happy to read a few
magazines that go to other extremes. Of the two big yoga magazines
available on the newsstand, Yoga Journal is the yogis' Vanity
Fair, and Yoga International is their Real Simple. We
can dispense with the latter except for the pretzel-position pictures,
but Yoga Journal is a very good niche magazine--good niche
publications take their subject and use it expansively, as a jumping-off
point. The June issue has an excellent and anthropologically important
piece by Marina Budhos on how yoga practice in the West, especially
among Americans, is changing the age-old practice in India, the
Americans behaving like cargo cultists in reverse.

Budhos found that many of the Indians in an Indian ashram (where, by the
way, the hatha yoga teacher was "a really tough Israeli") were attending
because they "were interested in teaching yoga as a career." Many of the
foreigners were simply having yoga fun on vacation--although, as I have
discovered while doing the tortoise position, the word "yoga" and the
word "fun" should never be used in the same sentence. Daniel Ghosal, an
Indian-American, says the Americans who come to India for yoga are seen
by the Indians as "kind of 'cracked.'" Indians don't think of yoga as a
social trend. "The lighting of candles and all that," Ghosal says
dismissively. "To Indians, it's just yoga."

"The Path of the Peaceful Warrior," by Anne Cushman, is also an amusing
piece. In it--after lighting a fire with newspapers in which she sees
headlines about terror and anthrax burning away, and after "folding into
the silence and surrender of a deep forward bend" (that's classic yoga
writing; you just have to push past it)--Cushman proposes a "Yogic
Battle Plan for the War on Terror." I suppose it's better than beefing
up your naval program at Newport News...

The first step: "Stop." I like that. That should be the entirety of an
Op-Ed piece on the Middle East crisis.

There is also "Contemplate death." Under that weighty heading, Cushman
includes this nice aperçu: "The American government's
instruction to 'Be on high alert, yet go about your ordinary life' may
have struck many people as all but impossible, but that paradoxical
injunction is actually...a core yogic practice." (Don't tell Rumsfeld!)
Under "Look Deeply," Cushman cites Tricycle editor James
Shaheen's remark that bin Laden was "inadvertently speaking the Buddhist
truth of interdependence when he said, 'Until there is peace in the
Middle East, there will be no peace for Americans at home.'" "Practice
nonviolence" is another step in the yogic battle; "take action," the
last. By the end, Yoga Journal is beginning to sound like the
editors of Southern Exposure.

Sad News

Earthtimes, the monthly environmental and social paper
spiritedly edited for twelve years by the effervescent Pranay Gupte, is
folding up shop after July for lack of funding. As Gupte said in a
farewell note to colleagues: "Undercapitalization is always bad for
business; zero capitalization is worse. Since my basement press is
beyond repair, I can't even print rupee notes any longer to sustain
Earthtimes." That's Gupte and the tone of Earthtimes,
too--in moments of pain and crisis, a quiet, self-deflating, sustaining
humor.
