
Unions are gradually making fuller use of the Internet's capacities to
improve communication with their own staffs or members. But increasingly
they are also using the web to recruit new members or to establish
"virtual communities" of union supporters in arenas not yet amenable to
the standard collective-bargaining model.

Alliance@IBM (www.allianceibm.org) is an example of an effective
Net-supported minority union, operating without a demonstrated pro-union
majority and without a collective-bargaining contract at a traditional
nonunion company. The alliance provides information and advice to
workers at IBM through the web. A similar effort at a partially
organized employer is WAGE ("Workers at GE," www.geworkersunited.org), which draws on contributions from fourteen cooperating
international unions. The Microsoft-inflected WashTech
(www.washtech.org) and the Australian IT Workers Alliance
(www.itworkers-alliance.org) are open-source unions that are closer to
craft unions or occupational associations. Both are responsive to the
distinctive professional needs of these workers, such as access to a
variety of job experiences and additional formal education, and the
portability of high-level benefits when changing jobs.

The National Writers Union (www.nwu.org), a UAW affiliate, is another
example of a union virtually created off the Net. It provides
information and advice--including extensive job postings--to members,
and it lobbies on their behalf, most spectacularly in the recent Supreme
Court decision it won affirming freelance writers' copyrights. But most
of its members work without a collectively bargained contract.

In Britain, UNISON (the largest union in the country) and the National
Union of Students have a website that tells student workers their rights
and gives them advice about how to deal with workplace problems
(www.troubleatwork.org.uk). It is a particularly engaging and practical
illustration of how concrete problems can be addressed through Net
assistance.

Finally, for a more geographically defined labor community, take a look
at the website of the King County AFL-CIO (www.kclc.org), the Seattle
central labor council that uses the Net to coordinate its own business,
bring community and labor groups together for discussion and common
action, post messages and general information to the broader community,
and otherwise create a "virtual" union hall with much of the spirit and
dense activity that used to be common in actual union halls in major
cities.

The SAT has been on the ropes lately. The University of California
system has threatened to quit using the test for its freshman
admissions, arguing that the exam has done more harm than good. The
State of Texas, responding to a federal court order prohibiting its
affirmative action efforts, has already significantly curtailed the
importance of the SAT as a gatekeeper to its campuses. Even usually
stodgy corporate types have started to beat up on the SAT. Last year,
for example, a prominent group of corporate leaders joined the National
Urban League in calling upon college and university presidents to quit
placing so much stock in standardized admissions tests like the SAT,
which they said were "inadequate and unreliable" gatekeepers to college.

Then again, if the SAT is anything, it's a survivor. The SAT
enterprise--consisting of its owner and sponsor, the College Board, and
the test's maker and distributor, the Educational Testing Service--has
gamely reinvented itself over the years in myriad superficial ways,
hedging against the occasional dust-up of bad public relations. The SAT,
for example, has undergone name changes over the years in an effort to
reflect the democratization of higher education in America and
consequent changes in our collective notions about equal opportunity.
But through it all, the SAT's underlying social function--as a sorting
device for entry into or, more likely, maintenance of American
elitehood--has remained ingeniously intact, a firmly rooted icon of
American notions about meritocracy.

Indeed, the one intangible characteristic of the SAT and other
admissions tests that the College Board would never want to change is
the virtual equation, in the public's mind, of test scores and academic
talent. Like the tobacco companies, ETS and the College Board (both are
legally nonprofit organizations that in many respects resemble
profit-making enterprises) put a cautionary label on the product.
Regarding their SAT, the organizations are obliged by professional codes
of proper test practices to inform users of standardized admissions
tests that the exams can be "useful" predictors of later success in
college, medical school or graduate school, when used in conjunction
with other factors, such as grades.

But the actual place of admissions testing in America rarely honors that
caveat. Most clear-eyed Americans know that results on the SAT, the
Graduate Record Exam or the Medical College Admission Test are widely
viewed as synonymous with academic talent in higher education. Whether
that view is accurate--and there's lots of evidence that it's not--is
quite beside the point.

Given the inordinate weight that test scores carry in the American
version of meritocracy, it's no surprise that federal courts have been
hearing lawsuits from white, middle-class applicants complaining they
were denied admission to law school even though their LSAT scores were
fifty points higher than those of minority applicants who were admitted;
that neoconservative doomsayers warn that the academic quality of
America's great universities will plummet if the hordes of unwashed
(read: low test scores) are allowed entry; or that articles are written
under titles like "Backdoor Affirmative Action," arguing that
de-emphasizing test scores in Texas and California is merely a covert
tactic of public universities to beef up minority enrollments in
response to court bans on affirmative action.

Indeed, Rebecca Zwick, a professor of education at the University of
California, Santa Barbara, and a former researcher at the Educational
Testing Service, wrote that "Backdoor Affirmative Action" article for
Education Week in 1999, implying that do-gooders who place less
emphasis on test scores in order to raise minority enrollments are
simply blaming the messenger. And so it should not be surprising that
the same author would provide an energetic defense of the SAT and
similar exams in her new book, Fair Game? The Use of Standardized
Admissions Tests in Higher Education.

Those, like Zwick, who are wedded to the belief that test scores are
synonymous with academic merit will like this concise book. They will
praise its 189 pages of text as, finally, a fair and balanced
demystification of the esoteric world of standardized testing. Zwick and
her publisher are positioning the book as the steady, guiding hand
occupying the sensible middle ground in an emotional debate that they
claim is dominated by journalists and other uninformed critics who don't
understand the complex subject of standardized testing. "All too
often...discussions of testing rely more on politics or emotion than on
fact," Zwick says in her preface. "This book was written with the aim of
equipping contestants in the inevitable public debates with some solid
information about testing."

If only it were true. Far from reflecting the balanced approach the
author claims, the book is thinly disguised advocacy for the status quo
and a defense of the hegemony of gatekeeping exams for college and
university admissions. It could be more accurately titled (without the
bothersome question mark) "Fair Game: Why America Needs the SAT."

As it stands, the research staff of the College Board and the
Educational Testing Service, Zwick's former employer, might as well have
written this book, as she trots out all the standard arguments those organizations have used for years to show why healthy doses of standardized testing are really good for American education. At almost every opportunity, Zwick quotes an ETS or College Board study in the most favorable light, couching it as the final word on a particular issue, while casting aspersion on
other studies and researchers (whose livelihoods don't depend on selling
tests) that might well draw different conclusions. Too often Zwick
provides readers who might be unfamiliar with the research about testing
with an overly simplistic and superficial treatment. At worst, she
leaves readers with grossly misleading impressions.

After providing a quick and dirty account of IQ testing at the turn of
the last century, a history that included the rabidly eugenic beliefs of
many of the early testmakers and advocates in Britain and the United
States ("as test critics like to point out," Zwick sneers), the author
introduces readers to one of the central ideologies of mental testing:
that standardized exams are a fair way to sort a society's young for
opportunities in higher education. Sure, mental testing has had some
embarrassing moments in its history that we moderns frown on nowadays,
but the testing movement has had its good guys too. In this view, far
from being a tool to promote and protect the interests of a society's
most privileged citizens, the cold objectivity of standardized testing
remains an essential instrument of democratic values.

According to this belief, standardized testing for admission to college
serves the interest of meritocracy, in which people are allowed to shine
by their wits, not their social connections. That same ideology, says
Zwick, drove former Harvard president James Bryant Conant, whom Zwick
describes as a "staunch supporter of equal opportunity," in his quest to
establish a single entrance exam, the SAT, for all colleges. Conant, of
course, would become the first chairman of the board of the newly formed
Educational Testing Service. But, as Nicholas Lemann writes in his 1999
book The Big Test: The Secret History of the American Meritocracy,
Conant wasn't nearly so interested in widening
opportunity to higher education as Zwick might think. Conant was keen on
expanding opportunity, but, as Lemann says, only for "members of a tiny
cohort of intellectually gifted men." Disillusioned only with the form
of elitism that had taken shape at Harvard and other Ivy League
colleges, which allotted opportunities based on wealth and parentage,
Conant was nevertheless a staunch elitist, an admirer of the
Jeffersonian ideal of a "natural aristocracy." In Conant's perfect
world, access to this new kind of elitehood would be apportioned not by
birthright but by performance on aptitude tests. Hence the SAT, Lemann
writes, "would finally make possible the creation of a natural
aristocracy."

The longstanding belief that high-stakes mental tests are the great
equalizer of society is dubious at best, and at worst a clever piece of
propaganda that has well served the interests of American elites. In
fact, Alfred Binet himself--one of the fathers of IQ testing, whose
scale became the basis of the Stanford-Binet intelligence test, the
precursor to the modern SAT--observed the powerful relationship between
a child's performance on his so-called intelligence test and that
child's social class, a phenomenon Binet described in his 1916 book The
Development of Intelligence in Children.

And it's the same old story with the SAT. Look at the college-bound high
school seniors of 2001 who took the SAT, and the odds are still firmly
stacked against young people from modest economic
backgrounds. A test-taker whose parents did not complete high school can
expect to score fully 171 points below the SAT average, College Board
figures show. On the other hand, high schoolers whose moms and dads have
graduate degrees can expect to outperform the SAT average by 106 points.

What's more, the gaps in SAT performance between whites and blacks and
between whites and Mexican-Americans have only ballooned in the past ten
years. The gap between white and black test-takers widened five points
and eleven points on the SAT verbal and math sections, respectively,
between 1991 and 2001. SAT score gaps between whites and
Mexican-Americans surged a total of thirty-three points during that same
period.

For critics of the national testing culture, such facts are troubling
indeed, suggestive of a large web of inequity permeating society, with
educational opportunities distributed neatly along class and race
lines, from preschool through medical school. But for Zwick, the notion
of fairness when applied to standardized admissions tests boils down to
a relatively obscure but standard procedure in her field of
"psychometrics," which is in part the study of the statistical
properties of standardized tests.

Mere differences in average test scores between most minority groups and
whites or among social classes aren't all that interesting to Zwick. More
interesting, she maintains, is the comparative accuracy of test scores
in predicting university grades between whites and other racial groups.
In this light, she says, the SAT and most standardized admissions tests
are not biased against blacks, Latinos or Native Americans. In fact, she
says, drawing on 1985 data from a College Board study that looked at
forty-five colleges, those minority groups earned lower grades in
college than predicted by their SAT scores--a classic case of
"overprediction" that substantiates the College Board claim that the SAT
is more than fair to American minorities. By contrast, if the SAT is
unfair to any group, it's unfair to whites and Asian-Americans, because
they get slightly better college grades than the SAT would predict,
Zwick suggests.

Then there's the odd circumstance when it comes to standardized
admissions tests and women. A number of large studies of women and
testing at the University of California, Berkeley, the University of
Michigan and other institutions have consistently shown that while women
(on average) don't perform as well on standardized tests as male
test-takers do, women do better than men in actual classroom work.
Indeed, Zwick acknowledges that, unlike their effect for most minority
groups, standardized tests tend to "underpredict" the actual academic
performance of women.

But on this question, as with so many others in her book, Zwick's
presentation is thin, more textbookish than the thorough examination and
analysis her more demanding readers would expect. Zwick glosses over a
whole literature on how the choice of test format, such as
multiple-choice versus essay examinations, rewards some types of
cognitive approaches and punishes others. For example, there's evidence
to suggest that SAT-type tests dominated by multiple-choice formats
reward speed, risk-taking and other surface-level "gaming" strategies
that may be more characteristic of males than of females. Women and
girls may tend to approach problems somewhat more carefully, slowly and
thoroughly--cognitive traits that serve them well in the real world of
classrooms and work--but hinder their standardized test performance
compared with that of males.

Beyond Zwick's question of whether the SAT and other admissions tests
are biased against women or people of color is the perhaps more basic
question of whether these tests are worthwhile predictors of academic
performance for all students. Indeed, the ETS and the College Board sell
the SAT on the rather narrow promise that it helps colleges predict
freshman grades, period. On this issue, Zwick's presentation is not a
little pedantic, seeming to paint anyone who doesn't claim to be a
psychometrician as a statistical babe in the woods. Zwick quotes the
results of a College Board study published in 1994 finding that one's
SAT score by itself accounts for about 13 percent of the differences in
freshman grades; that one's high school grade average is a slightly
better predictor of college grades, accounting for about 15 percent of
the grade differences among freshmen; and that the SAT combined with
high school grades is a better predictor than the use of grades alone.
In other words, it's the standard College Board line that the SAT is
"useful" when used with other factors in predicting freshman grades. (It
should be noted that Zwick, consistent with virtually all College Board
and ETS presentations, reports her correlation statistics without
converting them into what's known as "R-squared" figures. In my view,
the latter statistics provide readers with a common-sense understanding
of the relative powers of high school grades and test scores in
predicting college grades. I have made those conversions for readers in
the statistics quoted above.)
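The conversion described above is simple arithmetic: squaring a correlation coefficient gives the proportion of variance explained (R-squared). A minimal sketch, where the correlation values are illustrative figures back-computed from the percentages quoted in the text, not numbers taken from the 1994 College Board study itself:

```python
# Convert a predictor-criterion correlation (r) into variance explained (R^2).
# The r values below are illustrative, chosen so that r^2 matches the ~13%
# and ~15% figures quoted in the text; they are not the study's own numbers.

def variance_explained(r: float) -> float:
    """Proportion of variance in the outcome accounted for by the predictor."""
    return r ** 2

predictors = {
    "SAT score alone": 0.36,        # 0.36^2 ~= 0.13, i.e. ~13% of grade variance
    "HS grade average alone": 0.39,  # 0.39^2 ~= 0.15, i.e. ~15% of grade variance
}

for name, r in predictors.items():
    print(f"{name}: r = {r:.2f}, R^2 = {variance_explained(r):.1%}")
```

The squaring is why reporting raw correlations flatters a test: a correlation of 0.36 sounds substantial, but it corresponds to explaining only about an eighth of the differences in freshman grades.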

Unfortunately, Zwick misrepresents the real point that test critics make
on the question of predictive validity of tests like the SAT. The
salient issue is whether the small extra gains in predicting freshman
grades that the SAT might afford individual colleges outweigh the social
and economic costs of the entire admissions testing enterprise, costs
borne by individual test-takers and society at large.

Even on the narrow question of the usefulness of the SAT to individual
colleges, Zwick does not adequately answer what's perhaps the single
most devastating critique of the SAT. For example, in the 1988 book
The Case Against the SAT, James Crouse and Dale Trusheim argued
compellingly that the SAT is, for all practical purposes, useless to
colleges. They showed, for example, that if a college wanted to maximize
the number of freshmen who would earn a grade-point average of at least
2.5, then the admissions office's use of high school rank alone as the
primary screening tool would result in 62.2 percent "correct"
admissions. Adding the SAT score would improve the rate of correct
decisions by only about 2 in 100. The researchers also showed,
remarkably, that if the admissions objective is broader, such as
optimizing the rate of bachelor's degree completion for those earning
grade averages of at least 2.5, the use of high school rank by itself
would yield a slightly better rate of prediction than if the SAT scores
were added to the mix, rendering the SAT counterproductive. "From a
practical viewpoint, most colleges could ignore their applicants' SAT
score reports when they make decisions without appreciably altering the
academic performance and the graduation rates of students they admit,"
Crouse and Trusheim concluded.

At least two relatively well-known cases of colleges at opposite ends of
the public-private spectrum, which have done exactly as Crouse and
Trusheim suggest, powerfully illustrate the point. Consider the
University of Texas system, which was compelled by a 1996 federal
appeals court order, the Hopwood decision, to dismantle its
affirmative-action admissions programs. The Texas legislature responded
to the threat of diminished diversity at its campuses with the "top 10
percent plan," requiring public universities to admit any student
graduating in the top 10 percent of her high school class, regardless of
SAT scores.

Zwick, of course, is obliged in a book of this type to mention the Texas
experience. But she does so disparagingly and without providing her
readers with the most salient details on the policy's effects in terms
of racial diversity and the academic performance of students. Consider
the diversity question. While some progressives might at first have
recoiled at the new policy as itself an attack on affirmative action,
the feared damage to diversity has not materialized. In fact, at the
University of Texas at
Austin, the racial diversity of freshman classes has been restored to
pre-Hopwood levels, after taking an initial hit. Indeed, the
percentage of white students at Austin reached a historic low point in
2001, at 61 percent. What's more, the number of high schools sending
students to the state's flagship campus at Austin has significantly
broadened. The "new senders" to the university include more inner-city
schools in Dallas, Houston and San Antonio, as well as more rural
schools than in the past, according to research by UT history professor
David Montejano, among the plan's designers.

But the policy's impact on academic performance at the university might
be even more compelling, since that is the point upon which
neoconservative critics have been most vociferous in their condemnations
of such "backdoor" affirmative action plans that put less weight on test
scores. A December 1999 editorial in The New Republic typified
this road-to-ruin fiction: Alleging that the Texas plan and others like
it come "at the cost of dramatically lowering the academic
qualifications of entering freshmen," the TNR editorial warned,
these policies are "a recipe for the destruction of America's great
public universities."

Zwick, too, neglects to mention the facts about academic performance of
the "top 10 percenters" at the University of Texas, who have proven the
dire warnings to be groundless. In 2000, at every SAT score interval,
from below 900 to 1,500 and above, students admitted without regard to
their SAT scores earned better grades than their non-top 10 percent
counterparts, according to the university's latest research report on
the policy.

Or consider that the top 10 percenters averaged a GPA of 3.12 as
freshmen. Their SAT average was about 1,145, fully 200 points lower than
that of non-top 10 percent students, who earned slightly lower GPAs of 3.07. In
fact, the grade average of 3.12 for the automatically admitted students
with moderate SAT scores was equal to the grade average of non-top 10
percenters coming in with SATs of 1,500 and higher. The same pattern has
held across the board, and for all ethnic groups.

Bates College in Lewiston, Maine, is one case of a college that seemed
to anticipate the message of the Crouse and Trusheim research. Bates ran
its own numbers, found that the SAT was simply not an adequate predictor
of academic success for many students, and abandoned the test as an
entry requirement several years ago. Other highly
selective institutions have similar stories to tell, but Bates serves to
illustrate. In dropping the SAT mandate, the college now gives students
a choice of submitting SATs or not. But it permits no choice in
requiring that students submit a detailed portfolio of their actual work
and accomplishments while in high school for evaluation, an admissions
process completed not just by admissions staff but by the entire Bates
faculty.

As with the Texas automatic admission plan, Zwick would have been
negligent not to mention the case of Bates, and she does so in her
second chapter; but it's an incomplete and skewed account. Zwick quotes
William Hiss, the former dean of admissions at Bates, in a 1993
interview in which he suggests that the Bates experience, while perhaps
appropriate for a smaller liberal arts college, probably couldn't be
duplicated at large public universities. That quote well serves Zwick's
thesis that the SAT is a bureaucratically convenient way to maintain
academic quality at public institutions like UT-Austin and the
University of California. "With the capability to conduct an intensive
review of applications and the freedom to consider students' ethnic and
racial backgrounds, these liberal arts colleges are more likely than
large university systems to succeed in fostering diversity while toeing
the line on academic quality," Zwick writes.

But Zwick neglects to mention that Hiss has since disavowed his caveats
about Bates's lessons for larger public universities. In fact, Hiss, now
a senior administrator at the college, becomes palpably irritated at
inequalities built into admissions systems that put too much stock in
mental testing. He told me in a late 1998 interview, "There are twenty
different ways you can dramatically open up the system, and if you
really want to, you'll figure out a way. And don't complain to me about
the cost, that we can't afford it."

Zwick punctuates her brief discussion of Bates and other institutions
that have dropped the SAT requirement by quoting from an October 30,
2000, article, also in The New Republic, that purportedly
revealed the "dirty little secret" on why Bates and other colleges have
abandoned the SAT. The piece cleverly observed that because SAT
submitters tend to have higher test scores than nonsubmitters, dropping
the SAT has the added statistical quirk of boosting SAT averages in
U.S. News & World Report's coveted college rankings. That
statistical anomaly was the smoking gun the TNR reporter needed
to "prove" the conspiracy.

But to anyone who has seriously researched the rationales colleges have
used in dropping the SAT, the TNR piece was a silly bit of
reporting. At Bates, as at the University of Texas, the SAT
"nonsubmitters" have performed as well as or better than students who
submitted SATs, often with scores hundreds of points lower than the
submitters'. But readers of Fair Game? wouldn't know
this.

One could go on citing many more cases in which Zwick misleads her
readers through lopsided reporting and superficial analysis, such as her
statements that the Graduate Record Exam is about as good a predictor of
graduate school success as the SAT is for college freshmen (it's not,
far from it), or her overly optimistic spin on the results of many
studies showing poor correlations between standardized test scores and
later career successes.

Finally, Zwick's presentation might have benefited from a less
textbookish style, with more enriching details and concrete examples.
Instead, she tries to position herself as a "just the facts" professor
who won't burden readers with extraneous contextual details or accounts
of the human side of the testing culture. But like the enormously
successful--at least in commercial terms--standardized tests themselves,
which promote the entrenched belief in American society that genuine
learning and expert knowledge are tantamount to success on Who Wants
to Be a Millionaire-type multiple-choice questions, books like Fair
Game? might be the standardized account that some readers really want.

Speech to The Democratic National Committee--Western Caucus
Saturday, May 25, 2002
Seattle, Washington

Did you know that the mere act of asking what kind of warning members of
the Bush Administration may have received about a 9/11-like attack is
just clever hype by that sneaky liberal media conspiracy? So goes the
argument of the regular National Review seat on Communist News
Network liberal media program, Reliable Sources. Recently, host
(and Washington Post media reporter) Howard Kurtz decided to fill
the chair not with his favorite guest/source, NR editor Rich Lowry, or
the much-invited NR Online editor, Jonah Goldberg, but with the
relatively obscure NR managing editor, Jay Nordlinger. Nordlinger explained, "The
story is surprisingly slight," blown up by a liberal media fearing Bush
was getting "a free ride." Give the man points for consistency. The Bush
White House's exploitation of 9/11 to fatten Republican coffers via the
sale of the President's photo that fateful day--scurrying from safe
location to safe location--was also, in Nordlinger's view, "another
almost nonstory."

Nordlinger's complaint echoed the even stronger contention of another
Kurtz favorite, Andrew Sullivan. The world-famous
gaycatholictorygapmodel took the amazing position that potential
warnings about a terrorist threat that would kill thousands and land us
in Afghanistan were "not a story" at all. Sounding like a Karl Rove/Mary
Matalin love child, Sullivan contended, "The real story here is the
press and the Democrats' need for a story about the war to change the
climate of support for the President."

But Sullivan at least deserves our admiration for expertly spinning
Kurtz regarding The New York Times Magazine's decision to cut him
loose. Echoing Sullivan's PR campaign--and with a supportive quote from,
uh, Rich Lowry--Kurtz framed the story entirely as one of Times
executive editor Howell Raines avenging Sullivan's obsessive attacks on
the paper's liberal bias. OK, perhaps the standards for a Post
writer tweaking the Times top dog are not those of, say, Robert
Caro on Robert Moses, but where's the evidence that Raines was even
involved? The paper had plenty of reasons to lose Sullivan even if his
stupendously narcissistic website never existed. Sullivan's Times
work may have been better disciplined than his "TRB" columns in the
notsoliberal New Republic (before he was replaced by editor Peter
Beinart) and certainly than the nonsense he posts online, but it still
must have embarrassed the Newspaper of Record. As (now Times Book
Review columnist) Judith Shulevitz pointed out in a critique of his
"dangerously misleading" paean to testosterone, Sullivan was permitted
to "mix up his subjective reactions with laboratory work." Stanford
neurobiologist Robert Sapolsky told Shulevitz at the time, Sullivan "is
entitled to his fairly nonscientific opinion, but I'm astonished at the
New York Times." The Andrew Sullivan Principles of Pre-Emptive
Sexual Disclosure also embarrassed the magazine when he used its pages
to out as gay two Clinton Cabinet members and liberal Democrats like
Rosie O'Donnell. (I imagine he came to regret this invasion of privacy
when his own life became tabloid fare.) Meanwhile, Sullivan's
McCarthyite London Sunday Times column about September 11--in
which he waxed hysterical about the alleged danger of a pro-terrorist
"Fifth Column" located in the very city that suffered the attack--should
have been enough to put off any discerning editor forever. Yet the myth
of his martyrdom continues. Sullivan's website carries the vainglorious
moniker "unfit to print." For once, he's right.

* * *

Sorry, I know enough can be more than enough, but this quote of Sully's
is irresistible: "I ignored Geoffrey Nunberg's piece in The American
Prospect in April, debunking the notion of liberal media bias by
numbers, because it so flew in the face of what I knew that I figured
something had to be wrong." When a conservative pundit "knows" something
to be true, don't go hassling him with contrary evidence. It so happens
that linguist Geoffrey Nunberg did the necessary heavy lifting to
disprove perhaps the one contention in Bernard Goldberg's book
Bias that the so-called liberal media felt compelled--perhaps out of
misplaced generosity--to accept: that the media tend to label
conservatives as such more frequently than alleged liberals. Tom
Goldstein bought into it in Columbia Journalism Review. So did
Jonathan Chait in TNR. Howard Kurtz and Jeff Greenfield let it go
unchallenged on Communist News Network. Meanwhile, Goldberg admits to
"knowing," Sullivan style, happily ignorant of any relevant data beyond
his own biases. He did no research, he says, because he did not want his
book "to be written from a social scientist point of view."

Unfortunately for Bernie, Nunberg discovered that alleged liberals are
actually labeled as such by mainstream journalists more frequently than
are conservatives. This is true for politicians, for actors, for
lawyers, for everyone--even institutions like think tanks and pressure
groups. The reasons for this are open to speculation, but Nunberg has
the numbers. A weblogger named Edward Boyd ran his own set of numbers
that came out differently, but Nunberg effectively disposed of Boyd's
(honest) errors in a follow-up article for TAP Online. In a truly
bizarre Village Voice column, Nat Hentoff recently sought to ally
himself with the pixilated Goldberg but felt a need to add the
qualifier, "The merits of Goldberg's book aside..." Actually, it's no
qualifier at all. Goldberg's worthless book has only one merit, which
was to inspire my own forthcoming book refuting it. (Hentoff
mischaracterizes that, too.) Meanwhile, the merits of Hentoff's column
aside, it's a great column.

* * *

Speaking of ex-leftists, what's up with Christopher Hitchens calling
Todd Gitlin and me "incurable liberals"? Since when is liberalism
treated as something akin to a disease in this, America's oldest
continuously published liberal magazine? Here's hoping my old friend
gets some treatment for his worsening case of incurable Horowitzism. (Or
is it Sullivanism? Hentoffism? Is there a Doctor of Philosophy in the
house?)

Meanwhile, I've got a new weblog with more of this kind of thing at
www.altercation.msnbc.com. Check it every day, or the terrorists
win...

"Death Star," "Get Shorty," "Fat Boy"--the revelation of Enron's trading
schemes in California has turned the Enron scandals virulent again.
Just when the White House thought the disease was in remission and
relegated to the business pages, the California scams exposed more of a
still-metastasizing cancer of corporate corruption.

Internal Enron memos reveal that it and other companies preyed on
California's energy crisis, helping to manufacture shortages and using
sham trades to drive up prices. The somnambulant Federal Energy
Regulatory Commission (FERC)--headed by Pat Wood III, "Kenny Boy" Lay's
handpicked chairman--decided that its initial finding of no market
manipulation in California was inoperative and opened a broader
investigation. With stocks plummeting and lawsuits piling up, CEOs at
Dynegy and CMS Energy resigned, as did heads of trading at Reliant
Resources and CMS.

The Bush Administration was directly implicated as the White House's
Enron stonewall began to collapse. A reluctant Joseph Lieberman,
chairman of the Senate Governmental Affairs Committee, finally got
sufficient spine to issue subpoenas, stimulating the White House to
release more documents about its contacts with Enron. These showed that
the White House had lied to House investigators when it reported only
six contacts between Enron officials and the White House energy task
force. The incomplete White House submissions now admit four times that
number, with more surely to come.

Lay and the Enron executives were pressing Vice President Cheney not
only to influence the President's energy policy but also to oppose price
controls on electricity in California, even as they were gaming the
market. Cheney and Bush responded to their leading contributor by
publicly scorning price controls, while White House aides encouraged the
energy industry to organize an ad campaign in California against
controls. Cheney surely felt comfortable with Enron's shady side: As we
recently learned, when he was CEO of Halliburton and its profits were
declining, his accountants--the ubiquitous Arthur Andersen--suddenly
started counting as revenue a portion of payments that were in dispute,
without informing investors of the change.

The Administration has painted Enron as a business, not a political,
scandal. Now it is apparent that the scandal is political and
economic, showing the problems of a system with too little
accountability and too much corporate influence both in the White House
and on Capitol Hill. And with the United States having to import more
than $1 billion a day in capital to cover trade deficits, the scandals
are already a drag on investment, growth and jobs.

Neither the Administration, Congress nor the business lobby has yet
awakened to the perils. Bush retains as Army Secretary former Enron
executive Tom White, who claims no knowledge that his subsidiary was
involved in the sham trading schemes (although his own bonuses were
undoubtedly based in part on the inflated revenues that resulted). Big
Five accounting firms lobbyist Harvey Pitt remains head of the SEC, even
after repeatedly traducing elementary ethics by meeting privately with
representatives of companies under investigation by his agency. Wood
remains the head of FERC, even as legislators call on him to recuse
himself from the California investigation. Bush and House Republicans
continue to resist sensible reforms. The business and accounting lobby,
in a victory of ideology over common sense, has mobilized against
anything with teeth.

Beltway conventional wisdom dismisses the political fallout of the Enron
scandals. But Americans are furious at executives who betray their
workers and mislead small investors while plundering their companies.
Thus far their anger hasn't fixed on Washington, but it may if no one is
held accountable. It's long past time for Senate Democrats to rouse
themselves, demand the heads of White and Pitt and launch a scorching
public investigation of the Administration's complicity with Enron in
California and elsewhere. Any real reform will require displacing Enron
conservatives, with their mantra of "self-regulation" and their corrupt
politics of money. With the revelations continuing and elections coming
up, progressives should be mobilizing independently to name names,
exposing those who shield the powerful. If voters learn who the culprits
are, Enron may end up reflecting the "genius" not of capitalism but of
democracy--the people's ability to clean out the stables when the stench
gets too foul.

In the past two months I have talked with many people who have a keen
interest in whether the Senate will decide to ban therapeutic cloning.
At a conference at a Philadelphia hospital, a large number of people,
their bodies racked with tremors from Parkinson's disease, gathered to
hear me speak about the ethics of stem cell research. A few weeks
earlier I had spoken to another group, many of whom were breathing with
the assistance of oxygen tanks because they have a genetic disease,
Alpha-1 antitrypsin deficiency, that destroys their lungs and livers.
Earlier still I met with a group of parents whose children are paralyzed
as a result of spinal cord injuries.

At each meeting I told the audience there was a good chance that the
government would criminalize research that might find answers to their
ailments if it required using cloned human embryos, on the grounds that
research using such embryos is unethical. The audience members were
incredulous. And well they should have been. A bizarre alliance of
antiabortion religious zealots and technophobic neoconservatives along
with a smattering of scientifically befuddled antibiotech progressives
is pushing hard to insure that the Senate accords more moral concern to
cloned embryos in dishes than it does to kids who can't walk and
grandmothers who can't hold a fork or breathe.

Perhaps it should come as no surprise that George W. Bush and the House
of Representatives have already taken the position that any research
requiring the destruction of an embryo, cloned or otherwise, is wrong.
This view derives from the belief, held by many in the Republican camp,
that personhood begins at conception, that embryos are people and that
killing them to help other people is simply wrong. Although this view
about the moral status of embryos does not square with what is known
about them--science has shown that embryos require more than genes in
order to develop, that not all embryos have the capacity to become a
person and that not all conception begins a life--it at least has the
virtue of moral clarity.

But aside from those who see embryos as tiny people, such clarity of
moral vision is absent among cloning opponents. Consider the views of
Leon Kass, William Kristol, Charles Krauthammer and Francis Fukuyama.
Each says he opposes research involving the cloning of human embryos.
Each has been pushing furiously in the media and in policy circles to
make the case that nothing could be more morally heinous than harvesting
stem cells from such embryos. And each says that his repugnance at the
idea of cloning research has nothing to do with a religiously based view
of what an embryo is.

The core of the case against cloning for cures is that it involves the
creation, to quote the latest in a landslide of moral fulminations from
Krauthammer, "of a human embryo for the sole purpose of using it for its
parts...it will sanction the creation of an entire industry of embryo
manufacture whose explicit purpose is...dismemberment for research."
Sounds like a very grim business indeed--and some progressives, notably
Jeremy Rifkin and Norman Mailer, have sounded a similar alarm as they
have joined the anticloning crusade.

From the secular viewpoint, which Krauthammer and like-minded cloning
opponents claim to hold, there is no evidence for the position that
embryonic clones are persons or even potential persons. As a simple fact
of science, embryos that reside in dishes are going nowhere. The
potential to become anything requires a suitable environment. Talk of
"dismemberment," which implicitly confers moral status on embryos,
betrays the sort of faith-based thinking that Krauthammer says he wants
to eschew. Equally ill-informed is the notion that equivalent medical
benefits can be derived from research on adult stem cells; cloned
embryonic stem cells have unique properties that cannot be duplicated.

The idea that women could be transformed into commercial egg farms also
troubles Krauthammer, as well as some feminists and the Christian
Medical Association. The CMA estimates that to make embryonic stem-cell
cloning work, more than a billion eggs would have to be harvested. But
fortunately for those hoping for cures, the CMA is wrong: Needed now for
cloned embryonic stem-cell research are thousands of eggs, not billions.
While cloning people is a long shot, cloning embryos is not, and it
should be possible to get the research done either by paying women for
their eggs or asking those who suffer from a disease, or who know
someone they care about who has a disease, to donate them. Women are
already selling and donating eggs to others who are trying to have
babies. Women and men are also donating their kidneys, their bone marrow
and portions of their livers to help others, at far greater risk to
themselves than egg donation entails. And there is no reason that embryo
splitting, the technique used today in animals, could not provide the
requisite embryo and cloned stem-cell lines to treat all in need without
a big increase in voluntary egg donation from women.

In addition to conjuring up the frightening but unrealistic image of
women toiling in Dickensian embryo-cloning factories, those like
Krauthammer, who would leave so many senior citizens unable to move
their own bodies, offer two other moral thoughts. If we don't draw the
line at cloning for cures, there will soon enough be a clone moving into
your neighborhood; and besides, it is selfish and arrogant to seek to
alter our own genetic makeup to live longer.

The reality is that cloning has a terrible track record in making
embryos that can become fetuses, much less anything born alive. The most
recent review of cloning research shows an 85 percent failure rate in
getting cow embryos to develop into animals. And of those clones born
alive, a significant percentage, more than a third, have serious
life-threatening health problems. Cloned embryos have far less potential
than embryos created the old-fashioned way, or even frozen embryos, to
become anything except a ball of cells that can be tricked into
becoming other cells that can cure diseases. Where Krauthammer sees
cloned embryos as persons drawn and quartered for their organs, in
reality there exists merely a construct of a cell that has no potential
to become anything if it is kept safely in a dish and almost no
potential to develop even if it is put into a womb. Indeed, current work
on primate cloning has been so unproductive--no primate clone has been
made to date--that there is a growing sentiment in scientific circles that
human cloning for reproduction is impossible. The chance of anyone
cloning a full-fledged human is almost nil, but in any case there is no
reason that it cannot be stopped simply by banning the transfer of these
embryos into wombs.

But should we really be manipulating our genes to try to cure diseases
and live longer? Kass and Fukuyama, in various magazine pieces and
books, say no--that it is selfish and arrogant indulgence at its worst.
Humanity is not meant to play with its genes simply to live longer and
better.

Now, it can be dangerous to try to change genes. One young man is dead
because of an experiment in gene therapy at my medical school. But the
idea that genes are the defining essence of who we are and therefore
cannot be touched or manipulated recalls the rantings of Gen. Jack D.
Ripper in Dr. Strangelove, who wanted to preserve the
integrity of his precious bodily fluids. There's nothing inherently
morally wrong with trying to engineer cells, genes and even cloned
embryos to repair diseases and terminal illnesses. Coming from those who
type on computers, wear glasses, inject themselves with insulin, have
had an organ transplant, who walk with crutches or artificial joints or
who have used in vitro fertilization or neonatal intensive care to
create their children, talk of the inviolate essence of human nature and
repugnance at the "manufactured" posthuman is at best disingenuous.

The debate over human cloning and stem cell research has not been one of
this nation's finest moral hours. Pseudoscience, ideology and plain
fearmongering have been much in evidence. If the discussions were merely
academic, this would be merely unfortunate. They are not. The flimsy
case against cloning for cures is being brought to the White House, the
Senate and the American people as if the opponents hold the moral high
ground. They don't. The sick and the dying do. The Senate must keep its
moral priorities firmly in mind as the vote on banning therapeutic
cloning draws close.

In the past two decades, Richard Rodriguez has offered us a gamut of
anecdotes, mostly about himself in action in an environment that is not
always attuned to his own inner life. These anecdotes have taken the
form of a trilogy that started in 1982 with the classic Hunger of
Memory, continued in 1992 with Days of Obligation and concludes now
with his new book Brown: The Last Discovery of America.
This isn't a trilogy about history.
It isn't about sociology or politics either, at least in their most
primary senses. Instead, it is a sustained meditation on Latino life in
the United States, filled with labyrinthine reflections on philosophy
and morality.

Rodriguez embraces subjectivity wholeheartedly. His tool, his
astonishing device, is the essay, and his model, I believe, is
Montaigne, the father of the personal essay and a genius at taking even
an insect tempted by a candle flame as an excuse to meditate on the
meaning of life, death and everything in between. Not that Montaigne is
Rodriguez's only touchstone. In Brown he turns to Alexis de
Tocqueville and James Baldwin as well. And in the previous installments
of his trilogy, particularly owing to his subject matter, he has emerged
as something of a successor to Octavio Paz.

The other trunk of this genealogical tree I'm shaping is V.S. Naipaul,
or at least so he appears to me, a counterpoint, as I reread
Rodriguez's oeuvre. They have much in common: They explore a
culture through its nuances and not, as it were, through its
high-profile iconography; they are meticulous littérateurs,
intelligent, incessantly curious; and, more important, everywhere they
go they retain, to their honor, the position of the outsider looking in.
Rodriguez, in particular, has been a Mexican-American but not a
Chicano--that is, he has rejected the invitation to be a full part of
the community that shaped him. Instead, he uses himself as a looking
glass to reflect, from the outside, on who Mexicans are, in and beyond
politics. This, predictably, has helped fill large reservoirs of
animosity against him. I don't know of any other Latino author who
generates so much anger. Chicanos love to hate him as much as they hate
to love him.

Why this is so isn't difficult to understand: He is customarily critical
of programs and policies that are seen as beneficial to the community,
for example, bilingual education and affirmative action, which, in his
eyes, have only balkanized families, neighborhoods and cities. In
Hunger of Memory he portrayed himself as a Scholarship Boy who
benefited from affirmative action. He reached a succinct conclusion: Not
race but individual talent should be considered in a person's
application for school or work--not one's skin color, last name or
country of origin, only aptitude. Naipaul too can play the devil: His
journeys through India and the Arab world, even through the lands of El
Dorado, are unsettling when one considers his rabid opinions on the
"uncivilized" natives. But Naipaul delivers these opinions with
admirable grace and, through that, makes his readers rethink the
colonial galaxy, revisit old ideas. In that sense, Naipaul and Rodriguez
are authors who force upon us the necessity to sharpen our own ideas. We
read them, we agree and disagree with them, so as to fine-tune our own
conception of who we are. They are of the kind of writer who first
infuriates, then unsettles us. What they never do is leave the reader
unchanged. For that alone, one ought to be grateful.

Apparently, the trilogy came into being after Rodriguez's agent, as the
author himself puts it in "Hispanic," the fifth chapter of Brown,
"encouraged from me a book that answers a simple question: What do
Hispanics mean to the life of America? He asked me that question several
years ago in a French restaurant on East Fifty-seventh Street, as I
watched a waiter approach our table holding before him a shimmering
îles flottantes."

The image of îles flottantes is a fitting one, I believe,
since the Latino mosaic on this side of the border (Rodriguez often
prefers to use the term "Hispanic" in his pages) might be seen as
nothing if not an archipelago of self-sufficient subcultures: Cuban,
Puerto Rican, Mexican, Salvadoran, Nicaraguan, Dominican... and the
whole Bolivarian range of possibilities. Are these islands of identity
interconnected? How do they relate to one another? To what extent are a
Brazilian in Tallahassee and a Mexicano in Portland, Oregon, kindred
spirits?

Judging by his answer, Rodriguez might have been asked the wrong
question. Or else, he might have chosen to respond impractically. For
the question that runs through the three installments is, How did
Hispanics become brown? His belief is that brown, as a color, is the
sine qua non of Latinos, and he exercises it as a metaphor of mixture.
"Brown as impurity," he reasons. "I write of a color that is not a
singular color, not a strict recipe, not an expected result, but a color
produced by careless desire, even by accident." It is the color of
mestizaje, i.e., the miscegenation that shaped the Americas from
1492 onward, as they were forced, in spite of themselves, into modern
times. It is the juxtaposition of white European and dark aboriginal, of
Hernán Cortés and his mistress and translator, La
Malinche. And it is also the so-called raza cósmica that
Mexican philosopher José Vasconcelos talked about in the early
twentieth century, a master race that, capitalizing on its own impurity,
would rise to conquer the hemisphere, if not the entire globe.

But have Hispanics really become brown on the Technicolor screen of
America? Rodriguez is mistaken, I'm afraid. The gestation of race in the
Caribbean, from Venezuela to Mexico and the Dominican Republic, has a
different tint, since African slaves were brought in to replace Indians
for the hard labor in mines and fields, and their arrival gave birth to
other racial mixtures, among them those termed "mulattoes" and "zambos."
Argentina, on the other hand, had a minuscule aboriginal population when
the Spanish viceroys and missionaries arrived. The gauchos, a sort of
cowboy, are at the core of its national mythology, as can be seen in the
works of Domingo Faustino Sarmiento, José Hernández and
Jorge Luis Borges. "Brown," in Rodriguez's conception, might be the
color of Mexicans in East LA, but surely not of Cubans in Miami. Some
Latinos might have become brown, but not all. And then again, what does
"brown" really mean? Rodriguez embraces it as a metaphor of impurity.
Mestizos are crossbreeds, they are impure, and impurity is beautiful.
But the term "brown" has specific political connotations as well. It is,
to a large extent, a byproduct of the civil rights era, the era of
César Chávez and the Young Lords, coined in reaction to
the black-and-white polarity that played out in Washington policy
corridors and the media: Brown is between white and black, a third
option in the kaleidoscope of race. A preferred term in the Southwest
was La Raza, but "brown" also found its way into manifestoes, political
speeches, legal documents and newspaper reports.

Rodriguez isn't into the Chicano movement, though. My gut instinct is
that he feels little empathy toward the 1960s in general, let alone
toward the Mexican-American upheaval. His views on la hispanicidad
in America are defined by his Mexican ancestry and by
his residence in San Francisco, where he has made his home for years. He
is disconnected from the Caribbean component of Latinos, and, from the
reaction I see in readers on the East Coast, the Caribbean Latinos are
also uninvolved with him.

Furthermore, Rodriguez limits himself to the concept of miscegenation,
but only at the racial level. What about promiscuity in language, for
example? Promiscuity might be a strong word, but it surely carries the
right message. Rodriguez's English is still the Queen's English:
overpolished, uncorrupted, stainless. How is it that he embraces
mestizaje but has little to say about Spanglish, that
disgustingly gorgeous mix of Spanish and English that is neither one nor
the other? Isn't that in-betweenness what America is about today? On the
issue of language, I have a side comment: I find it appalling that
Rodriguez's volumes are not available in Spanish to Mexicans and other
Latinos. Years ago, a small Iberian press, Megazul, released Hambre
de memoria in a stilted, unapologetically Castilian translation.
That, clearly, was the wrong chord to touch, when the author's resonance
is closer to San Antonio than to San Sebastián. How much longer
need Mexicans wait to read the work en español mexicano of
a canonical figure, whose lifelong quest has been to understand Mexicans
beyond the pale? The question brings us back to Paz and his "The Pachuco
and Other Extremes," the first chapter in his masterpiece The
Labyrinth of Solitude, released in 1950. It has angered Chicanos for
decades, and with good reason: This is an essay that distorts Mexican
life north of the border. Paz approached the pachuco--a social type of
Mexican youth in Los Angeles in the 1940s who fashioned a specific
lingo, and idiosyncrasies that Elvis Presley appropriated obliquely--as
a deterioration of the Mexican psyche. In his work, Rodriguez has
established a sort of colloquy with Paz, though not a direct address. He
embraces Paz's cosmopolitanism, his openness, and perceives him as a
Europeanized intellectual invaluable in the quest to freshen up Mexican
elite culture. But he refuses to confront Paz's anti-Chicanismo, and in
general Paz's negative views on Latinos in the United States. Once, for
instance, when asked what he thought about Spanglish, Paz responded that
it was neither good nor bad, "it is simply an aberration." In any case,
reading both authors on US-Mexican relations is an unpredictable,
enlightening catechism, filled with detours. While Mexicans might not
like to hear what Rodriguez has to say about them and about himself (he
has talked of "hating Mexico"), at least they will be acquainted with
his opinions.

All this is to say that Rodriguez's response to "What do Hispanics mean
to the life of America?" is partial at best. The trilogy shows a mind
engaged, but its subject is almost unmovable. Hunger of Memory
was an autobiographical meditation set in the United States as the
country was about to enter the Reagan era. It denounced a stagnant
society, interested in the politics of compassion more than in the
politics of equality, a society with little patience for Mexicans.
Days of Obligation was also about los Estados Unidos as
the first Bush presidency was approaching its end. By then the Reagan
mirage was officially over. We were about to enter another house of
mirrors under the tutelage of Bill Clinton. And this third installment
of the trilogy arrives in bookstores at a time when the melting pot,
la sopa de culturas, is boiling again, with xenophobia against
Arabs at a height, and Latinos, already the largest minority according
to the latest US Census data--35.3 million strong by late 2000, if one
counts only those officially registered--are still on the fringes,
fragmented, compartmentalized, more a sum of parts than a whole.

These changes are visible only through inference in the trilogy;
Rodriguez seldom makes use of political facts. He lives in a dreamlike
zone, a universe of ideas and sensations and paradox. Somewhere in
Brown he announces:

A few weeks ago, in the newspaper (another day in the multicultural
nation), a small item: Riot in a Southern California high school.
Hispanic students protest, then smash windows, because African-American
students get four weeks for Black History month, whereas Hispanics get
one. The more interesting protest would be for Hispanic students to
demand to be included in Black History month. The more interesting
remedy would be for Hispanic History week to include African history.

This sums up Rodriguez's approach: a micromanagement of identity
delivered periodically from the same viewpoint. Or has the viewpoint
changed? It is possible to see a growing maturity by reading the trilogy
chronologically. He started as an antisegregationist, a man interested
in assimilation of Mexicans into the larger landscape of America. His
feelings toward Mexico and toward his homosexuality were tortured at the
time. These became clear, or at least clearer, in the second
installment, in which a picture of a San Francisco desolated by AIDS and
an argument with the author's own mexicanidad as personified by
his father, among other changes, were evident. Assimilation was still a
priority, but by the 1990s Rodriguez had ceased to be interested in such
issues and was more attracted to his own condition as a public gay
Latino.

Brown is again about assimilation, but from a perspective that
asserts America is a country shaped by so many interbred layers of
ethnicity that nothing is pure anymore. At one point, he describes the
conversation of a couple of girls one afternoon on Fillmore Street. He
renders them and their dialogue thus: "Two girls. Perhaps sixteen.
White. Anglo, whatever. Tottering on their silly shoes. Talking of boys.
The one girl saying to the other: ...His complexion is so cool, this
sort of light--well, not that light." And Rodriguez ends: "I realized my
book will never be equal to the play of the young." This need to capture
what surrounds him is always evident, although it isn't always
successful, because he is an intellectual obsessed with his own stream
of consciousness rather than with catching the pulse of the nation. But
I've managed to explain the continuity of themes in Rodriguez's three
volumes only tangentially.

There is another take, summed up in three catchwords: class, ethnicity
and race. He appears to encourage this reading. The first installment is
about a low-income family whose child moves up in the hierarchy; the
second about the awakening to across-the-border roots; and the third
about "a tragic noun, a synonym for conflict and isolation," race. But
Rodriguez is quick to add:

race is not such a terrible word for me. Maybe because I am skeptical by
nature. Maybe because my nature is already mixed. The word race
encourages me to remember the influence of eroticism on history. For
that is what race memorializes. Within any discussion of race, there
lurks the possibility of romance.

So is this what the trilogy is about, finally? The endeavor strikes me
as rather mercurial. Because Rodriguez works extensively through
metaphor and hyperbole, future generations will read into his books what
they please, depending on the context. I still like Hunger of
Memory the best. Days of Obligation strikes me as a
collection of disparate essays without a true core. And Brown is
a book that is not fully embracing, not least because it refuses to
recognize the complexity of Latinos in the United States. In it
Rodriguez describes his namesake, Richard Nixon, as "the dark father of
Hispanicity." "Surviving Chicanos (one still meets them) scorn the term
Hispanic," Rodriguez argues, "in part because it was Richard Nixon who
drafted the noun and who made the adjective uniform." A similar
reference was invoked in an Op-Ed piece by him in the New York
Times, in which he declared George W. Bush the first Hispanic
President of the United States, the way Bill Clinton was the first black
President. Is this true? The argument developed is not always clear-cut:
It twists and turns, as we have by now come to expect. I've learned to
respect and admire Rodriguez. When I was a newly arrived immigrant in
New York City, I stumbled upon an essay of his and then read his first
book. I was mesmerized by the prose but found myself in strong
disagreement with its tenets, and we have corresponded about that in the
intervening years.

At any rate, where will Rodriguez go from here, now that the trilogy is
finished? Might he finally take a long journey overseas? Is his vision
of America finally complete? Not quite, I say, for the country is
changing rapidly. Mestizaje, he argues, is no longer the domain
of Latinos alone. We are all brown: dirty and impure. "This is not the
same as saying 'the poor shall inherit the earth' but is possibly
related," Rodriguez states. "The poor shall overrun the earth. Or the
brown shall." This is a statement for the history books. In his view,
America is about to become América--everyone in it a Hispanic, if
not physically, at least metaphorically. "American history books I read
as a boy were all about winning and losing," Rodriguez states in
"Peter's Avocado," the last of the nine essays in Brown. And with
a typical twist, he continues, "One side won; the other side lost.... [But]
the stories that interested me were stories that seemed to lead off the
page: A South Carolina farmer married one of his slaves. The farmer
died. The ex-slave inherited her husband's chairs, horses, rugs, slaves.
And then what happened? Did it, in fact, happen?"
