
The FBI has come under harsh criticism in recent weeks for its failure
to act on information that might have enabled it to thwart the September
11 attacks. Rather than deny the criticism, FBI Director Robert Mueller
has embraced it (easy for him to do, since he didn't start on the job
until September 4) and then exploited it to argue that the bureau needs
more power, more resources and fewer restrictions.

Both the criticism and the remedy are misguided. The dots that everyone
now says should have been connected consist of a few leads spread over a
three-year period: a 1998 memo from an FBI agent in Oklahoma suspicious
about some Middle Eastern men taking flying lessons; a July 2001 memo
from a Phoenix agent speculating that Osama bin Laden could be sending
terrorists to flight schools here; and the August 2001 arrest of
Zacarias Moussaoui for acting suspiciously in flight school. Viewed in
hindsight, each points inexorably to September 11. But there is a world
of difference, as any gambler, stock trader or palm reader will tell
you, between perceiving the connections after and before the fact. On
September 10 these three bits of information competed for the FBI's
attention with thousands of other memos, leads and suspicious events
pointing in thousands of other directions. We are engaged in a
nationwide session of Monday-morning quarterbacking.

The remedy is worse. Shifting resources to fighting terrorist threats
makes sense, but freeing the FBI from the minimal restrictions it has
operated under in the past does not. The guidelines governing the FBI's
domestic criminal investigations, which do not even apply to
international terrorism investigations, had nothing to do with the FBI
missing the September 11 plot. And it is likely that the changes in the
guidelines announced by Attorney General John Ashcroft will actually
reduce the FBI's effectiveness in fighting terrorism.

The old guidelines were sparked by revelations that in the 1960s and
'70s, the FBI's COINTELPRO initiative targeted perfectly lawful antiwar,
environmental, feminist and civil rights groups for widespread
monitoring, infiltration and disinformation. The guidelines sought to
remedy the FBI's proclivity for indulging in guilt by association and
conducting intrusive and sweeping investigations of political groups
without any criminal basis. They sought to focus the FBI on its mission,
which, contrary to popular perception, has always been to prevent as
well as to investigate crime.

But even under the guidelines abuses continued. One of the most
prominent involved an investigation of the Committee in Solidarity With
the People of El Salvador (CISPES) from 1983 to 1985. Under the rubric
of counterterrorism, the FBI monitored student rallies, infiltrated
meetings and identified attendees at CISPES events. In the end, the
bureau had collected information on 1,330 groups--including Oxfam
America, the US Catholic Conference and a Cincinnati order of nuns--but
no evidence of crime.

Such investigations are likely to be commonplace in the post-
September 11 era. Ashcroft's guidelines expressly permit the FBI to
conduct some investigations without even a shred of information about
potential criminal conduct. And Congress has so expanded the definition
of federal crimes that requiring a criminal basis is not enough to
forestall political spying. Federal antiterrorism laws of 1996 and 2001
now make it a crime to provide any associational support to foreign
groups we designate as terrorist, even if the support has no connection
whatever to terrorist activity. Under those laws, the CISPES
investigation would have been legal, on suspicion that CISPES was
supporting the Salvadoran rebel movement.

The combined effect of the expanded statute, loosened guidelines and
increased counterterrorism personnel at the FBI will be to bring in
exponentially more information about the populace than the FBI has ever
had. Some of the additional information obtained may, like the isolated
leads developed before September 11, be related to terrorist plots. But
those leads are almost certain to be drowned out by the barrage of
information about innocent political activity.

An intelligence expert on a recent panel with me claimed that what we
need now is "all-source intelligence fusion," meaning a group of
analysts sitting in a room analyzing mounds of data for trends and
patterns. Despite its techno-trendy title, all-source intelligence
fusion is no substitute for good relations with the affected
communities. If the FBI has information that the threat is likely to
stem from Arab sources, it should be building bridges to the millions of
law-abiding Arabs--instead of profiling Arab students without cause,
holding Middle Easterners without charges and selectively registering
all immigrants from Arab countries. You don't build bridges by
infiltrating and monitoring legitimate political and religious activity.

Let's say I'm a Jehovah's Witness, and I get a job in an understaffed
emergency room where, following the dictates of my conscience, I refuse
to assist with blood transfusions and try my best to persuade my fellow
workers to do the same. How long do you think I'd last on the job? And
after my inevitable firing, how seriously do you think a jury would take
my claim that my rights had been violated? Five minutes and not very,
right? A similar fate would surely await the surgeon who converts to
Christian Science and decides to pray over his patients instead of
operating on them, the Muslim loan officer who refuses to charge
interest, the Southern Baptist psychotherapist who tells his Jewish
patients they're bound for hell. The law rightly requires employers to
respect employees' sincerely held religious beliefs, but not if those
beliefs really do prevent an employee from performing the job for which
she's been hired.

Change the subject to reproductive rights, though, and the picture gets
decidedly strange. In 1999 Michelle Diaz, a born-again Christian nurse
who had recently been hired by the Riverside Neighborhood Health Center,
a public clinic in Southern California, decided that emergency
contraception, the so-called morning-after pill, which acts to prevent
pregnancy if taken within seventy-two hours of unprotected intercourse,
was actually a method of abortion. She refused to dispense it or give
referrals to other providers; the clinic offered her a position that did
not involve reproductive healthcare, but when she told temporary nurses
at the clinic that they too would be performing abortions by dispensing
EC, Diaz, who was still on probation as a new hire, lost her job. She
sued with the help of the American Center for Law and Justice (ACLJ),
the religious-right law firm headed by Jay Sekulow. At the end of May a
jury agreed that her rights had been violated and awarded her $47,000.

Excuse me? A nurse at a public health clinic has the right to refuse to
provide patients with legally mandated services, give out misleading
health information in order to proselytize her co-workers to refuse as
well, and keep her job? The low-income women who come to Riverside
desperately in need of EC and abortion referrals are flat out of luck if
they happen to turn up when the anti-choicers are on shift? Riverside is
the largest public health clinic in the county, serving 150-200 patients
a day, but it operates with a staff of four nurses--should those four
people decide what services the clinic can offer? What about the
patient's right to receive standard medical care? Or the clinic's
responsibility to deliver the services for which it receives government
funds?

Some states, California among them, have "conscience laws," permitting
anti-choice healthworkers to refuse to be involved in abortions. EC,
however, is just a high dose of regular birth control pills that
prevents ovulation and implantation. It is not abortion, because until a
fertilized egg implants in the womb, the woman is not pregnant. A long
list of medical authorities--the American Medical Association, the
American Medical Women's Association, the American College of
Obstetricians and Gynecologists and Harvard Medical School--agrees that
EC is not an abortifacient, and a 1989 California court decision itself
distinguishes abortion from EC. There are lots of mysteries about the
Diaz case, ranging from why Diaz took a job in the first place that she
knew involved practices she found immoral, to how the jury could
possibly have come up with a decision so contrary to law and public
policy. Did
Diaz take the job with the express intention of disrupting services? Was
the jury anti-choice? Interestingly, the jury pool was partly drawn from
San Bernardino County, which last year unsuccessfully tried to bar its
public health clinics from dispensing EC.

Whatever the jury's thinking, the Diaz case represents the latest of
numerous attempts by the anti-choice movement to equate EC with abortion
and move it out of normal medical practice. Pharmacists for Life
International, a worldwide organization that claims to have some 1,500
members, calls it "chemical abortion" and urges pharmacists to refuse to
dispense it. The ACLJ is currently litigating on behalf of one who did.
Wal-Mart refuses to stock it at all. Anti-choicers in Britain made an
unsuccessful attempt to prevent EC from being dispensed over the counter
by placing it under an archaic law that prohibits "procuring a
miscarriage." Some anti-choicers have long argued that not just EC but
conventional birth-control methods--the pill, Norplant, Depo-Provera and
the IUD--are "abortifacients": In northern Kentucky anti-choice
extremists are campaigning to force one local health board to reject
Title X family-planning funds; according to the Lexington
Herald-Leader, the board's vote, scheduled for June 19, is too
close to call.

Although secular employers are expected to make reasonable
accommodations to religious employees--or even, if the Diaz verdict is
upheld, unreasonable ones--religious employers are not required to
return the favor. On the contrary, the Supreme Court, in The Church of
Jesus Christ of Latter Day Saints v. Amos, permits them to use
religious tests to hire and fire personnel as far from the sacred
mission as janitorial workers; if a Methodist church wants to refuse to
hire a Muslim security guard, it has the blessing of the Constitution to
do so. As often noted in this column, religious organizations can and do
fire employees who violate religious precepts on and even off the job. A
pro-choice nurse could not get a job at a Catholic hospital and declare
that her conscience required her to go against policy and hand out EC to
rape victims, or even tell them where to obtain it--even though medical
ethics oblige those who refuse to provide standard services for moral
reasons to give referrals, and even though Catholic hospitals typically
get about half of their revenue from the government.

According to the ACLJ, however, secular institutions should be sitting
ducks for any fanatic who can get hired even provisionally. The
Riverside clinic has asked the judge to set aside the Diaz verdict. If
that bid is unsuccessful, it will appeal. I'll let you know what
happens.

Unions are gradually making fuller use of the Internet's capacities to
improve communication with their own staffs or members. But increasingly
they are also using the web to recruit new members or to establish
"virtual communities" of union supporters in arenas not yet amenable to
the standard collective-bargaining model.

Alliance@IBM (www.allianceibm.org) is an example of an effective
Net-supported minority union, operating without a demonstrated pro-union
majority and without a collective-bargaining contract at a traditional
nonunion company. The alliance provides information and advice to
workers at IBM through the web. A similar effort at a partially
organized employer is WAGE ("Workers at GE," www.geworkersunited.org), which draws on contributions from fourteen cooperating
international unions. The Microsoft-inflected WashTech
(www.washtech.org) and the Australian IT Workers Alliance
(www.itworkers-alliance.org) are open-source unions that are closer to
craft unions or occupational associations. Both are responsive to the
distinctive professional needs of these workers, such as access to a
variety of job experiences and additional formal education, and the
portability of high-level benefits when changing jobs.

The National Writers Union (www.nwu.org), a UAW affiliate, is another
example of a union virtually created off the Net. It provides
information and advice--including extensive job postings--to members,
and it lobbies on their behalf, most spectacularly in the recent Supreme
Court decision it won on freelancers' copyrights. But most of
its members work without a collectively bargained contract.

In Britain, UNISON (the largest union in the country) and the National
Union of Students have a website that tells student workers their rights
and gives them advice about how to deal with workplace problems
(www.troubleatwork.org.uk). It is a particularly engaging and practical
illustration of how concrete problems can be addressed through Net
assistance.

Finally, for a more geographically defined labor community, take a look
at the website of the King County AFL-CIO (www.kclc.org), the Seattle
central labor council that uses the Net to coordinate its own business,
bring community and labor groups together for discussion and common
action, post messages and general information to the broader community,
and otherwise create a "virtual" union hall with much of the spirit and
dense activity that used to be common in actual union halls in major
cities.

The SAT has been on the ropes lately. The University of California
system has threatened to quit using the test for its freshman
admissions, arguing that the exam has done more harm than good. The
State of Texas, responding to a federal court order prohibiting its
affirmative action efforts, has already significantly curtailed the
importance of the SAT as a gatekeeper to its campuses. Even usually
stodgy corporate types have started to beat up on the SAT. Last year,
for example, a prominent group of corporate leaders joined the National
Urban League in calling upon college and university presidents to quit
placing so much stock in standardized admissions tests like the SAT,
which they said were "inadequate and unreliable" gatekeepers to college.

Then again, if the SAT is anything, it's a survivor. The SAT
enterprise--consisting of its owner and sponsor, the College Board, and
the test's maker and distributor, the Educational Testing Service--has
gamely reinvented itself over the years in myriad superficial ways,
hedging against the occasional dust-up of bad public relations. The SAT,
for example, has undergone name changes over the years in an effort to
reflect the democratization of higher education in America and
consequent changes in our collective notions about equal opportunity.
But through it all, the SAT's underlying social function--as a sorting
device for entry into or, more likely, maintenance of American
elitehood--has remained ingeniously intact, a firmly rooted icon of
American notions about meritocracy.

Indeed, the one intangible characteristic of the SAT and other
admissions tests that the College Board would never want to change is
the virtual equation, in the public's mind, of test scores and academic
talent. Like the tobacco companies, ETS and the College Board (both are
legally nonprofit organizations that in many respects resemble
profit-making enterprises) put a cautionary label on the product.
Regarding their SAT, the organizations are obliged by professional codes
of proper test practices to inform users of standardized admissions
tests that the exams can be "useful" predictors of later success in
college, medical school or graduate school, when used in conjunction
with other factors, such as grades.

But the actual place of admissions testing in America isn't nearly so
modest. Most clear-eyed Americans know that results on the SAT, the
Graduate Record Exam or the Medical College Admission Test are widely
viewed as synonymous with academic talent in higher education. Whether
that's true or not--and there's lots of evidence that it's not--is quite
beside the point.

Given the inordinate weight that test scores carry in the American
version of meritocracy, it's no surprise that federal courts have been
hearing lawsuits from white, middle-class law school applicants
complaining they were denied admission even though their LSAT scores
were fifty points higher than those of a minority applicant who was
admitted; why neoconservative doomsayers warn that the academic quality
of America's great universities will plummet if the hordes of unwashed
(read: low test scores) are allowed entry; why articles are written
under titles like "Backdoor Affirmative Action," arguing that
de-emphasizing test scores in Texas and California is merely a covert
tactic of public universities to beef up minority enrollments in
response to court bans on affirmative action.

Indeed, Rebecca Zwick, a professor of education at the University of
California, Santa Barbara, and a former researcher at the Educational
Testing Service, wrote that "Backdoor Affirmative Action" article for
Education Week in 1999, implying that do-gooders who place less
emphasis on test scores in order to raise minority enrollments are
simply blaming the messenger. And so it should not be surprising that
the same author would provide an energetic defense of the SAT and
similar exams in her new book, Fair Game? The Use of Standardized
Admissions Tests in Higher Education.

Those, like Zwick, who are wedded to the belief that test scores are
synonymous with academic merit will like this concise book. They will
praise its 189 pages of text as, finally, a fair and balanced
demystification of the esoteric world of standardized testing. Zwick and
her publisher are positioning the book as the steady, guiding hand
occupying the sensible middle ground in an emotional debate that they
claim is dominated by journalists and other uninformed critics who don't
understand the complex subject of standardized testing. "All too
often...discussions of testing rely more on politics or emotion than on
fact," Zwick says in her preface. "This book was written with the aim of
equipping contestants in the inevitable public debates with some solid
information about testing."

If only it were true. Far from reflecting the balanced approach the
author claims, the book is thinly disguised advocacy for the status quo
and a defense of the hegemony of gatekeeping exams for college and
university admissions. It could be more accurately titled (without the
bothersome question mark) "Fair Game: Why America Needs the SAT."

As it stands, the research staff of the College Board and the
Educational Testing Service, Zwick's former employer, might as well have
written this book, as she trots out all the standard arguments those
organizations have used for years to show why healthy doses of
standardized testing are really good for American education. At almost
every opportunity, Zwick quotes an ETS or College Board study in the
most favorable light, couching it as the final word on a particular
issue, while casting aspersions on
other studies and researchers (whose livelihoods don't depend on selling
tests) that might well draw different conclusions. Too often Zwick
provides readers who might be unfamiliar with the research about testing
with an overly simplistic and superficial treatment. At worst, she
leaves readers with grossly misleading impressions.

After providing a quick and dirty account of IQ testing at the turn of
the last century, a history that included the rabidly eugenic beliefs of
many of the early testmakers and advocates in Britain and the United
States ("as test critics like to point out," Zwick sneers), the author
introduces readers to one of the central ideologies of mental testing to
sort a society's young for opportunities for higher education. Sure,
mental testing has brought some embarrassing moments in history that we
moderns frown on nowadays, but the testing movement has had its good
guys too. Rather than being a tool to promote and protect the interests
of a society's most privileged citizens, the cold objectivity of
standardized testing remains an important goal for exercise of
democratic values.

According to this belief, standardized testing for admission to college
serves the interest of meritocracy, in which people are allowed to shine
by their wits, not their social connections. That same ideology, says
Zwick, drove former Harvard president James Bryant Conant, whom Zwick
describes as a "staunch supporter of equal opportunity," in his quest to
establish a single entrance exam, the SAT, for all colleges. Conant, of
course, would become the first chairman of the board of the newly formed
Educational Testing Service. But, as Nicholas Lemann writes in his 1999
book The Big Test: The Secret History of the American Meritocracy,
Conant wasn't nearly so interested in widening opportunity to higher
education as Zwick might think. Conant was keen on
expanding opportunity, but, as Lemann says, only for "members of a tiny
cohort of intellectually gifted men." Disillusioned only with the form
of elitism that had taken shape at Harvard and other Ivy League
colleges, which allotted opportunities based on wealth and parentage,
Conant was nevertheless a staunch elitist, an admirer of the
Jeffersonian ideal of a "natural aristocracy." In Conant's perfect
world, access to this new kind of elitehood would be apportioned not by
birthright but by performance on aptitude tests. Hence the SAT, Lemann
writes, "would finally make possible the creation of a natural
aristocracy."

The longstanding belief that high-stakes mental tests are the great
equalizer of society is dubious at best, and at worst a clever piece of
propaganda that has well served the interests of American elites. In
fact, Alfred Binet himself--among the fathers of IQ testing, whose
intelligence scale became the basis of the Stanford-Binet test, a
precursor to the modern SAT--observed the powerful relationship between
a child's performance on his so-called intelligence test and that
child's social class, a phenomenon described in his 1916 book The
Development of Intelligence in Children.

And it's the same old story with the SAT. Look at the college-bound high
school seniors of 2001 who took the SAT, and the odds are still firmly
stacked against young people of modest economic backgrounds. A
test-taker whose parents did not complete high school can
expect to score fully 171 points below the SAT average, College Board
figures show. On the other hand, high schoolers whose moms and dads have
graduate degrees can expect to outperform the SAT average by 106 points.

What's more, the gaps in SAT performance between whites and blacks and
between whites and Mexican-Americans have only ballooned in the past ten
years. The gap between white and black test-takers widened five points
and eleven points on the SAT verbal and math sections, respectively,
between 1991 and 2001. SAT score gaps between whites and
Mexican-Americans surged a total of thirty-three points during that same
period.

For critics of the national testing culture, such facts are troubling
indeed, suggestive of a large web of inequity that permeates society,
with educational opportunities distributed neatly along class and race
lines, from preschool through medical school. But for Zwick, the notion
of fairness when applied to standardized admissions tests boils down to
a relatively obscure but standard procedure in her field of
"psychometrics," which is in part the study of the statistical
properties of standardized tests.

Mere differences in average test scores between most minority groups and
whites or among social classes aren't all that interesting to Zwick. More
interesting, she maintains, is the comparative accuracy of test scores
in predicting university grades between whites and other racial groups.
In this light, she says, the SAT and most standardized admissions tests
are not biased against blacks, Latinos or Native Americans. In fact, she
says, drawing on 1985 data from a College Board study that looked at
forty-five colleges, those minority groups earned lower grades in
college than predicted by their SAT scores--a classic case of
"overprediction" that substantiates the College Board claim that the SAT
is more than fair to American minorities. By contrast, if the SAT is
unfair to any group, it's unfair to whites and Asian-Americans, because
they get slightly better college grades than the SAT would predict,
Zwick suggests.

Then there's the odd circumstance when it comes to standardized
admissions tests and women. A number of large studies of women and
testing at the University of California, Berkeley, the University of
Michigan and other institutions have consistently shown that while women
(on average) don't perform as well on standardized tests as male
test-takers do, women do better than men in actual classroom work.
Indeed, Zwick acknowledges that standardized tests tend to
"underpredict" the actual academic performance of women--the opposite of
the pattern she reports for most minority groups.
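
For readers who want to see what "overprediction" and "underprediction"
mean in practice, here is a minimal sketch in Python. It is purely
illustrative: the group labels, effect sizes and noise levels are
invented, not drawn from Zwick, ETS or College Board data. The method,
though, is the standard one in differential-prediction studies: fit a
single regression of college grades on test scores for everyone, then
compare each group's average residual.

```python
# Illustrative differential-prediction sketch; all numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
sat = rng.normal(1000, 150, size=n)        # hypothetical SAT scores
group = rng.choice(["A", "B"], size=n)     # hypothetical groups

# Assumed data-generating process: group B's grades run a bit below the
# common line (overprediction), group A's a bit above (underprediction).
gpa = (1.0 + 0.002 * sat
       + np.where(group == "B", -0.10, 0.05)
       + rng.normal(0, 0.4, size=n))

slope, intercept = np.polyfit(sat, gpa, 1)  # one line fit to everyone
residual = gpa - (slope * sat + intercept)

for g in ("A", "B"):
    mean_res = residual[group == g].mean()
    verdict = "underpredicted" if mean_res > 0 else "overpredicted"
    print(f"group {g}: mean residual {mean_res:+.3f} ({verdict})")
```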

But on this question, as with so many others in her book, Zwick's
presentation is thin, more textbookish than the thorough examination and
analysis her more demanding readers would expect. Zwick glosses over a
whole literature on how the choice of test format, such as
multiple-choice versus essay examinations, rewards some types of
cognitive approaches and punishes others. For example, there's evidence
to suggest that SAT-type tests dominated by multiple-choice formats
reward speed, risk-taking and other surface-level "gaming" strategies
that may be more characteristic of males than of females. Women and
girls may tend to approach problems somewhat more carefully, slowly and
thoroughly--cognitive traits that serve them well in the real world of
classrooms and work--but hinder their standardized test performance
compared with that of males.

Beyond Zwick's question of whether the SAT and other admissions tests
are biased against women or people of color is the perhaps more basic
question of whether these tests are worthwhile predictors of academic
performance for all students. Indeed, the ETS and the College Board sell
the SAT on the rather narrow promise that it helps colleges predict
freshman grades, period. On this issue, Zwick's presentation is not a
little pedantic, seeming to paint anyone who doesn't claim to be a
psychometrician as a statistical babe in the woods. Zwick quotes the
results of a College Board study published in 1994 finding that one's
SAT score by itself accounts for about 13 percent of the differences in
freshman grades; that one's high school grade average is a slightly
better predictor of college grades, accounting for about 15 percent of
the grade differences among freshmen; and that the SAT combined with
high school grades is a better predictor than the use of grades alone.
In other words, it's the standard College Board line that the SAT is
"useful" when used with other factors in predicting freshman grades. (It
should be noted that Zwick, consistent with virtually all College Board
and ETS presentations, reports her correlation statistics without
converting them into what's known as "R-squared" figures. In my view,
the latter statistics provide readers with a common-sense understanding
of the relative powers of high school grades and test scores in
predicting college grades. I have made those conversions for readers in
the statistics quoted above.)
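
The conversion itself is trivial, which is part of the point: a
correlation coefficient r becomes a share of explained variance simply
by squaring it, and the College Board's respectable-sounding
correlations shrink accordingly. A sketch, using correlations
back-computed from the percentages quoted above (the 1994 study's actual
coefficients may differ slightly):

```python
# Correlation-to-R-squared conversion; the r values are back-computed
# (sqrt(0.13) ~ 0.36, sqrt(0.15) ~ 0.39), not taken from the study.
predictors = {"SAT alone": 0.36, "High school grades alone": 0.39}

for name, r in predictors.items():
    print(f"{name}: r = {r:.2f} -> R^2 = {r * r:.2f} "
          f"({r * r:.0%} of variance in freshman grades)")
```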

Unfortunately, Zwick misrepresents the real point that test critics make
on the question of predictive validity of tests like the SAT. The
salient issue is whether the small extra gains in predicting freshman
grades that the SAT might afford individual colleges outweigh the social
and economic costs of the entire admissions testing enterprise, costs
borne by individual test-takers and society at large.

Even on the narrow question of the usefulness of the SAT to individual
colleges, Zwick does not adequately answer what's perhaps the single
most devastating critique of the SAT. For example, in the 1988 book
The Case Against the SAT, James Crouse and Dale Trusheim argued
compellingly that the SAT is, for all practical purposes, useless to
colleges. They showed, for example, that if a college wanted to maximize
the number of freshmen who would earn a grade-point average of at least
2.5, then the admissions office's use of high school rank alone as the
primary screening tool would result in 62.2 percent "correct"
admissions. Adding the SAT score would improve the rate of correct
decisions by only about 2 in 100. The researchers also showed,
remarkably, that if the admissions objective is broader, such as
optimizing the rate of bachelor's degree completion for those earning
grade averages of at least 2.5, the use of high school rank by itself
would yield a slightly better rate of prediction than if the SAT scores
were added to the mix, rendering the SAT counterproductive. "From a
practical viewpoint, most colleges could ignore their applicants' SAT
score reports when they make decisions without appreciably altering the
academic performance and the graduation rates of students they admit,"
Crouse and Trusheim concluded.
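
A toy simulation makes the Crouse-Trusheim logic easy to see. The sketch
below invents a latent "ability" factor and loadings; none of the
numbers come from their book. It simply shows why a second, correlated
predictor adds only marginally to the rate of "correct" admissions once
high school rank is already in hand.

```python
# Toy incremental-validity simulation; all loadings are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ability = rng.normal(size=n)                        # latent academic ability
hs_rank = 0.7 * ability + 0.7 * rng.normal(size=n)  # high school rank proxy
sat     = 0.6 * ability + 0.8 * rng.normal(size=n)  # SAT proxy
gpa     = 2.8 + 0.5 * ability + 0.4 * rng.normal(size=n)  # college GPA

def correct_rate(score, admit_frac=0.5):
    """Share of admitted students (top fraction by score) with GPA >= 2.5."""
    admitted = score >= np.quantile(score, 1 - admit_frac)
    return (gpa[admitted] >= 2.5).mean()

print(f"HS rank alone:       {correct_rate(hs_rank):.3f}")
# Crude equal-weight composite stands in for "adding the SAT."
print(f"HS rank + SAT:       {correct_rate(hs_rank + sat):.3f}")
```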

At least two relatively well-known cases of colleges at opposite ends of
the public-private spectrum, which have done exactly as Crouse and
Trusheim suggest, powerfully illustrate the point. Consider the
University of Texas system, which was compelled by a 1996 federal
appeals court order, the Hopwood decision, to dismantle its
affirmative-action admissions programs. The Texas legislature responded
to the threat of diminished diversity at its campuses with the "top 10
percent plan," requiring public universities to admit any student
graduating in the top 10 percent of her high school class, regardless of
SAT scores.

Zwick, of course, is obliged in a book of this type to mention the Texas
experience. But she does so disparagingly and without providing her
readers with the most salient details on the policy's effects in terms
of racial diversity and the academic performance of students. Consider
the diversity question. While some progressives might at first have
recoiled at the new policy as itself an attack on affirmative action, it
has not played out that way. In fact, at the University of Texas at
Austin, the racial diversity of freshman classes has been restored to
pre-Hopwood levels, after taking an initial hit. Indeed, the
percentage of white students at Austin reached a historic low point in
2001, at 61 percent. What's more, the number of high schools sending
students to the state's flagship campus at Austin has significantly
broadened. The "new senders" to the university include more inner-city
schools in Dallas, Houston and San Antonio, as well as more rural
schools than in the past, according to research by UT history professor
David Montejano, among the plan's designers.

But the policy's impact on academic performance at the university might
be even more compelling, since that is the point upon which
neoconservative critics have been most vociferous in their condemnations
of such "backdoor" affirmative action plans that put less weight on test
scores. A December 1999 editorial in The New Republic typified
this road-to-ruin fiction: Alleging that the Texas plan and others like
it come "at the cost of dramatically lowering the academic
qualifications of entering freshmen," the TNR editorial warned,
these policies are "a recipe for the destruction of America's great
public universities."

Zwick, too, neglects to mention the facts about academic performance of
the "top 10 percenters" at the University of Texas, who have proven the
dire warnings to be groundless. At every SAT score interval, from below
900 to 1,500 and above, students admitted in 2000 without regard to
their SAT scores earned better grades than their non-top 10 percent
counterparts, according to the university's latest research report on
the policy.

Or consider that the top 10 percenters averaged a GPA of 3.12 as
freshmen. Their SAT average was about 1,145, fully 200 points lower than
that of non-top 10 percent students, who earned slightly lower GPAs of
3.07. In
fact, the grade average of 3.12 for the automatically admitted students
with moderate SAT scores was equal to the grade average of non-top 10
percenters coming in with SATs of 1,500 and higher. The same pattern has
held across the board, and for all ethnic groups.

Bates College in Lewiston, Maine, is one case of a college that seemed
to anticipate the message of the Crouse and Trusheim research. Bates ran
its own numbers, found that the SAT was simply not an adequate predictor
of academic success for many students, and abandoned the test as an
entry requirement several years ago. Other highly
selective institutions have similar stories to tell, but Bates serves to
illustrate. In dropping the SAT mandate, the college now gives students
a choice of submitting SATs or not. But it permits no choice in
requiring that students submit a detailed portfolio of their actual work
and accomplishments while in high school for evaluation, an admissions
process completed not just by admissions staff but by the entire Bates
faculty.

As with the Texas automatic admission plan, Zwick would have been
negligent not to mention the case of Bates, and she does so in her
second chapter; but it's an incomplete and skewed account. Zwick quotes
William Hiss, the former dean of admissions at Bates, in a 1993
interview in which he suggests that the Bates experience, while perhaps
appropriate for a smaller liberal arts college, probably couldn't be
duplicated at large public universities. That quote well serves Zwick's
thesis that the SAT is a bureaucratically convenient way to maintain
academic quality at public institutions like UT-Austin and the
University of California. "With the capability to conduct an intensive
review of applications and the freedom to consider students' ethnic and
racial backgrounds, these liberal arts colleges are more likely than
large university systems to succeed in fostering diversity while toeing
the line on academic quality," Zwick writes.

But Zwick neglects to mention that Hiss has since disavowed his caveats
about Bates's lessons for larger public universities. In fact, Hiss, now
a senior administrator at the college, becomes palpably irritated at
inequalities built into admissions systems that put too much stock in
mental testing. He told me in a late 1998 interview, "There are twenty
different ways you can dramatically open up the system, and if you
really want to, you'll figure out a way. And don't complain to me about
the cost, that we can't afford it."

Zwick punctuates her brief discussion of Bates and other institutions
that have dropped the SAT requirement by quoting from an October 30,
2000, article, also in The New Republic, that purportedly
revealed the "dirty little secret" on why Bates and other colleges have
abandoned the SAT. The piece cleverly observed that because SAT
submitters tend to have higher test scores than nonsubmitters, dropping
the SAT has the added statistical quirk of boosting SAT averages in
U.S. News & World Report's coveted college rankings. That
statistical anomaly was the smoking gun the TNR reporter needed
to "prove" the conspiracy.

But to anyone who has seriously researched the rationales colleges have
used in dropping the SAT, the TNR piece was a silly bit of
reporting. At Bates, as at the University of Texas, the SAT
"nonsubmitters" have performed academically as well as or better than
students who submitted SATs, often with scores hundreds of points lower
than the SAT submitters. But readers of Fair Game? wouldn't know
this.

One could go on citing many more cases in which Zwick misleads her
readers through lopsided reporting and superficial analysis, such as her
statements that the Graduate Record Exam is about as good a predictor of
graduate school success as the SAT is for college freshmen (it's not,
far from it), or her overly optimistic spin on the results of many
studies showing poor correlations between standardized test scores and
later career successes.

Finally, Zwick's presentation might have benefited from a less
textbookish style, with more enriching details and concrete examples.
Instead, she tries to position herself as a "just the facts" professor
who won't burden readers with extraneous contextual details or accounts
of the human side of the testing culture. But like the enormously
successful--at least in commercial terms--standardized tests themselves,
which promote the entrenched belief in American society that genuine
learning and expert knowledge are tantamount to success on Who Wants to
Be a Millionaire-type multiple-choice questions, books like Fair Game?
might be the standardized account that some readers really want.


Did you know that the mere act of asking what kind of warning members of
the Bush Administration may have received about a 9/11-like attack is
just clever hype by that sneaky liberal media conspiracy? So goes the
argument of the regular National Review seat on Communist News
Network liberal media program, Reliable Sources. Recently, host
(and Washington Post media reporter) Howard Kurtz decided to fill
the chair not with his favorite guest/source, NR editor Rich Lowry, or
the much-invited NR Online editor, Jonah Goldberg, but with the
relatively obscure NR managing editor, Jay Nordlinger. Nordlinger
explained, "The
story is surprisingly slight," blown up by a liberal media fearing Bush
was getting "a free ride." Give the man points for consistency. The Bush
White House's exploitation of 9/11 to fatten Republican coffers via the
sale of the President's photo that fateful day--scurrying from safe
location to safe location--was also, in Nordlinger's view, "another
almost nonstory."

Nordlinger's complaint echoed the even stronger contention of another
Kurtz favorite, Andrew Sullivan. The world-famous
gaycatholictorygapmodel took the amazing position that potential
warnings about a terrorist threat that would kill thousands and land us
in Afghanistan were "not a story" at all. Sounding like a Karl Rove/Mary
Matalin love child, Sullivan contended, "The real story here is the
press and the Democrats' need for a story about the war to change the
climate of support for the President."

But Sullivan at least deserves our admiration for expertly spinning
Kurtz regarding The New York Times Magazine's decision to cut him
loose. Echoing Sullivan's PR campaign--and with a supportive quote from,
uh, Rich Lowry--Kurtz framed the story entirely as one of Times
executive editor Howell Raines avenging Sullivan's obsessive attacks on
the paper's liberal bias. OK, perhaps the standards for a Post
writer tweaking the Times top dog are not those of, say, Robert
Caro on Robert Moses, but where's the evidence that Raines was even
involved? The paper had plenty of reasons to lose Sullivan even if his
stupendously narcissistic website never existed. Sullivan's Times
work may have been better disciplined than his "TRB" columns in the
notsoliberal New Republic (before he was replaced by editor Peter
Beinart) and certainly than the nonsense he posts online, but it still
must have embarrassed the Newspaper of Record. As (now Times Book Review
columnist) Judith Shulevitz pointed out in a critique of his
"dangerously misleading" paean to testosterone, Sullivan was permitted
to "mix up his subjective reactions with laboratory work." Stanford
neurobiologist Robert Sapolsky told Shulevitz at the time, Sullivan "is
entitled to his fairly nonscientific opinion, but I'm astonished at the
New York Times." The Andrew Sullivan Principles of Pre-Emptive
Sexual Disclosure also embarrassed the magazine when he used its pages
to out as gay two Clinton Cabinet members and liberal Democrats like
Rosie O'Donnell. (I imagine he came to regret this invasion of privacy
when his own life became tabloid fare.) Meanwhile, Sullivan's
McCarthyite London Sunday Times column about September 11--in
which he waxed hysterical about the alleged danger of a pro-terrorist
"Fifth Column" located in the very city that suffered the attack--should
have been enough to put off any discerning editor forever. Yet the myth
of his martyrdom continues. Sullivan's website carries the vainglorious
moniker "unfit to print." For once, he's right.

* * *

Sorry, I know enough can be more than enough, but this quote of Sully's
is irresistible: "I ignored Geoffrey Nunberg's piece in The American
Prospect in April, debunking the notion of liberal media bias by
numbers, because it so flew in the face of what I knew that I figured
something had to be wrong." When a conservative pundit "knows" something
to be true, don't go hassling him with contrary evidence. It so happens
that linguist Geoffrey Nunberg did the necessary heavy lifting to
disprove perhaps the one contention in Bernard Goldberg's book Bias that
the so-called liberal media felt compelled--perhaps out of misplaced
generosity--to accept: that the media tend to label
conservatives as such more frequently than alleged liberals. Tom
Goldstein bought into it in Columbia Journalism Review. So did
Jonathan Chait in TNR. Howard Kurtz and Jeff Greenfield let it go
unchallenged on Communist News Network. Meanwhile, Goldberg admits to
"knowing," Sullivan style, happily ignorant of any relevant data beyond
his own biases. He did no research, he says, because he did not want his
book "to be written from a social scientist point of view."

Unfortunately for Bernie, Nunberg discovered that alleged liberals are
actually labeled as such by mainstream journalists more frequently than
are conservatives. This is true for politicians, for actors, for
lawyers, for everyone--even institutions like think tanks and pressure
groups. The reasons for this are open to speculation, but Nunberg has
the numbers. A weblogger named Edward Boyd ran his own set of numbers
that came out differently, but Nunberg effectively disposed of Boyd's
(honest) errors in a follow-up article for TAP Online. In a truly
bizarre Village Voice column, Nat Hentoff recently sought to ally
himself with the pixilated Goldberg but felt a need to add the
qualifier, "The merits of Goldberg's book aside..." Actually, it's no
qualifier at all. Goldberg's worthless book has only one merit, which
was to inspire my own forthcoming book refuting it. (Hentoff
mischaracterizes that, too.) Meanwhile, the merits of Hentoff's column
aside, it's a great column.

* * *

Speaking of ex-leftists, what's up with Christopher Hitchens calling
Todd Gitlin and me "incurable liberals"? Since when is liberalism
treated as something akin to a disease in this, America's oldest
continuously published liberal magazine? Here's hoping my old friend
gets some treatment for his worsening case of incurable Horowitzism. (Or
is it Sullivanism? Hentoffism? Is there a Doctor of Philosophy in the
house?)

Meanwhile, I've got a new weblog with more of this kind of thing at
www.altercation.msnbc.com. Check it every day, or the terrorists
win...

"Death Star," "Get Shorty," "Fat Boy"--the revelation of Enron's trading
schemes in California have turned the Enron scandals virulent again.
Just when the White House thought the disease was in remission and
relegated to the business pages, the California scams exposed more of a
still-metastasizing cancer of corporate corruption.

Internal Enron memos reveal that it and other companies preyed on
California's energy crisis, helping to manufacture shortages and using
sham trades to drive up prices. The somnambulant Federal Energy
Regulatory Commission (FERC)--headed by Pat Wood III, "Kenny Boy" Lay's
handpicked chairman--decided that its initial finding of no market
manipulation in California was inoperative and opened a broader
investigation. With stocks plummeting and lawsuits piling up, CEOs at
Dynegy and CMS Energy resigned, as did heads of trading at Reliant
Resources and CMS.

The Bush Administration was directly implicated as the White House's
Enron stonewall began to collapse. A reluctant Joseph Lieberman,
chairman of the Senate Governmental Affairs Committee, finally got
sufficient spine to issue subpoenas, stimulating the White House to
release more documents about its contacts with Enron. These showed that
the White House had lied to House investigators when it reported only
six contacts between Enron officials and the White House energy task
force. The incomplete White House submissions now admit four times that
number, with more surely to come.

Lay and the Enron executives were pressing Vice President Cheney not
only to influence the President's energy policy but also to oppose price
controls on electricity in California, even as they were gaming the
market. Cheney and Bush responded to their leading contributor by
publicly scorning price controls, while White House aides encouraged the
energy industry to organize an ad campaign in California against
controls. Cheney surely felt comfortable with Enron's shady side: As we
recently learned, when he was CEO of Halliburton and its profits were
declining, his accountants--the ubiquitous Arthur Andersen--suddenly
started counting as revenue a portion of payments that were in dispute,
without informing investors of the change.

The Administration has painted Enron as a business, not a political,
scandal. Now it is apparent that the scandal is both political and
economic, showing the problems of a system with too little
accountability and too much corporate influence both in the White House
and on Capitol Hill. And with the United States having to import more
than $1 billion a day in capital to cover trade deficits, the scandals
are already a drag on investment, growth and jobs.

Neither the Administration, Congress nor the business lobby has yet
awakened to the perils. Bush retains as Army Secretary former Enron
executive Tom White, who claims no knowledge that his subsidiary was
involved in the sham trading schemes (although his own bonuses were
undoubtedly based in part on the inflated revenues that resulted). Big
Five accounting firms lobbyist Harvey Pitt remains head of the SEC, even
after repeatedly traducing elementary ethics by meeting privately with
representatives of companies under investigation by his agency. Wood
remains the head of FERC, even as legislators call on him to recuse
himself from the California investigation. Bush and House Republicans
continue to resist sensible reforms. The business and accounting lobby,
in a victory of ideology over common sense, has mobilized against
anything with teeth.

Beltway conventional wisdom dismisses the political fallout of the Enron
scandals. But Americans are furious at executives who betray their
workers and mislead small investors while plundering their companies.
Thus far their anger hasn't fixed on Washington, but it may if no one is
held accountable. It's long past time for Senate Democrats to rouse
themselves, demand the heads of White and Pitt and launch a scorching
public investigation of the Administration's complicity with Enron in
California and elsewhere. Any real reform will require displacing Enron
conservatives, with their mantra of "self-regulation" and their corrupt
politics of money. With the revelations continuing and elections coming
up, progressives should be mobilizing independently to name names,
exposing those who shield the powerful. If voters learn who the culprits
are, Enron may end up reflecting the "genius" not of capitalism but of
democracy--the people's ability to clean out the stables when the stench
gets too foul.

In the past two months I have talked with many people who have a keen
interest in whether the Senate will decide to ban therapeutic cloning.
At a conference at a Philadelphia hospital, a large number of people,
their bodies racked with tremors from Parkinson's disease, gathered to
hear me speak about the ethics of stem cell research. A few weeks
earlier I had spoken to another group, many of whom were breathing with
the assistance of oxygen tanks because they have a genetic disease,
Alpha-1 antitrypsin deficiency, that destroys their lungs and livers.
Earlier still I met with a group of parents whose children are paralyzed
as a result of spinal cord injuries.

At each meeting I told the audience there was a good chance that the
government would criminalize research that might find answers to their
ailments if it required using cloned human embryos, on the grounds that
research using such embryos is unethical. The audience members were
incredulous. And well they should have been. A bizarre alliance of
antiabortion religious zealots and technophobic neoconservatives along
with a smattering of scientifically befuddled antibiotech progressives
is pushing hard to insure that the Senate accords more moral concern to
cloned embryos in dishes than it does to kids who can't walk and
grandmothers who can't hold a fork or breathe.

Perhaps it should come as no surprise that George W. Bush and the House
of Representatives have already taken the position that any research
requiring the destruction of an embryo, cloned or otherwise, is wrong.
This view derives from the belief, held by many in the Republican camp,
that personhood begins at conception, that embryos are people and that
killing them to help other people is simply wrong. Although this view
about the moral status of embryos does not square with what is known
about them--science has shown that embryos require more than genes in
order to develop, that not all embryos have the capacity to become a
person and that not all conception begins a life--it at least has the
virtue of moral clarity.

But aside from those who see embryos as tiny people, such clarity of
moral vision is absent among cloning opponents. Consider the views of
Leon Kass, William Kristol, Charles Krauthammer and Francis Fukuyama.
Each says he opposes research involving the cloning of human embryos.
Each has been pushing furiously in the media and in policy circles to
make the case that nothing could be more morally heinous than harvesting
stem cells from such embryos. And each says that his repugnance at the
idea of cloning research has nothing to do with a religiously based view
of what an embryo is.

The core of the case against cloning for cures is that it involves the
creation, to quote the latest in a landslide of moral fulminations from
Krauthammer, "of a human embryo for the sole purpose of using it for its
parts...it will sanction the creation of an entire industry of embryo
manufacture whose explicit purpose is...dismemberment for research."
Sounds like a very grim business indeed--and some progressives, notably
Jeremy Rifkin and Norman Mailer, have sounded a similar alarm as they
have joined the anticloning crusade.

From the secular viewpoint, which Krauthammer and like-minded cloning
opponents claim to hold, there is no evidence for the position that
embryonic clones are persons or even potential persons. As a simple fact
of science, embryos that reside in dishes are going nowhere. The
potential to become anything requires a suitable environment. Talk of
"dismemberment," which implicitly confers moral status on embryos,
betrays the sort of faith-based thinking that Krauthammer says he wants
to eschew. Equally ill-informed is the notion that equivalent medical
benefits can be derived from research on adult stem cells; cloned
embryonic stem cells have unique properties that cannot be duplicated.

The idea that women could be transformed into commercial egg farms also
troubles Krauthammer, as well as some feminists and the Christian
Medical Association. The CMA estimates that to make embryonic stem-cell
cloning work, more than a billion eggs would have to be harvested. But
fortunately for those hoping for cures, the CMA is wrong: Needed now for
cloned embryonic stem-cell research are thousands of eggs, not billions.
While cloning people is a long shot, cloning embryos is not, and it
should be possible to get the research done either by paying women for
their eggs or asking those who suffer from a disease, or who know
someone they care about who has a disease, to donate them. Women are
already selling and donating eggs to others who are trying to have
babies. Women and men are also donating their kidneys, their bone marrow
and portions of their livers to help others, at far greater risk to
themselves than egg donation entails. And there is no reason that embryo
splitting, the technique used today in animals, could not provide the
requisite embryo and cloned stem-cell lines to treat all in need without
a big increase in voluntary egg donation from women.

In addition to conjuring up the frightening but unrealistic image of
women toiling in Dickensian embryo-cloning factories, those like
Krauthammer, who would leave so many senior citizens unable to move
their own bodies, offer two other moral thoughts. If we don't draw the
line at cloning for cures, there will soon enough be a clone moving into
your neighborhood; and besides, it is selfish and arrogant to seek to
alter our own genetic makeup to live longer.

The reality is that cloning has a terrible track record in making
embryos that can become fetuses, much less anything born alive. The most
recent review of cloning research shows an 85 percent failure rate in
getting cow embryos to develop into animals. And of those clones born
alive, a significant percentage, more than a third, have serious
life-threatening health problems. Cloned embryos have far less potential
than embryos created the old-fashioned way, or even frozen embryos, to
become anything except a ball of cells that can be tricked into becoming
other cells that can cure diseases. Where Krauthammer sees
cloned embryos as persons drawn and quartered for their organs, in
reality there exists merely a construct of a cell that has no potential
to become anything if it is kept safely in a dish and almost no
potential to develop even if it is put into a womb. Indeed, current work
on primate cloning has been so unproductive--no primate clone has been
made to date--that there is a growing sentiment in scientific circles
that human cloning for reproduction is impossible. The chance of anyone
cloning a full-fledged human is almost nil, but in any case there is no
reason that it cannot be stopped simply by banning the transfer of these
embryos into wombs.

But should we really be manipulating our genes to try to cure diseases
and live longer? Kass and Fukuyama, in various magazine pieces and
books, say no--that it is selfish and arrogant indulgence at its worst.
Humanity is not meant to play with its genes simply to live longer and
better.

Now, it can be dangerous to try to change genes. One young man is dead
because of an experiment in gene therapy at my medical school. But the
idea that genes are the defining essence of who we are and therefore
cannot be touched or manipulated recalls the rantings of Gen. Jack D.
Ripper in Dr. Strangelove, who wanted to preserve the
integrity of his precious bodily fluids. There's nothing inherently
morally wrong with trying to engineer cells, genes and even cloned
embryos to repair diseases and terminal illnesses. Coming from those who
type on computers, wear glasses, inject themselves with insulin, have
had an organ transplant, who walk with crutches or artificial joints or
who have used in vitro fertilization or neonatal intensive care to
create their children, talk of the inviolate essence of human nature and
repugnance at the "manufactured" posthuman is at best disingenuous.

The debate over human cloning and stem cell research has not been one of
this nation's finest moral hours. Pseudoscience, ideology and plain
fearmongering have been much in evidence. If the discussions were merely
academic, this would be merely unfortunate. They are not. The flimsy
case against cloning for cures is being brought to the White House, the
Senate and the American people as if the opponents hold the moral high
ground. They don't. The sick and the dying do. The Senate must keep its
moral priorities firmly in mind as the vote on banning therapeutic
cloning draws close.
