One of the many casualties of our national obsession with the war on
Iraq is the emerging crisis of America's public colleges and universities.
In a nation that nominally eschews class distinctions as unbefitting our
supposed classlessness, whose elected officials decry any protest over
government largesse to the rich as "class warfare," real Americans--most
of whom are suckers, it turns out--spend untold amounts of time, cash
and effort obsessing on a
tiny number of elite colleges that really, really don't want the vast
majority of them as members.
Never mind, though. For an increasing number of baby boomer parents,
it's never too early to stick kids on the Harvard-
or-bust fast track. It starts with Mozart and Shakespeare in the crib,
and then it's off to the $8,000-a-year and up nursery school that admits
toddlers on the basis of IQ tests (performance on which is heavily
influenced by the educational attainment of the child's parents). The
proper nursery school inexorably leads to the high-powered kindergarten
and prep school and eventually to thousands of dollars more in fees for
college consultants and standardized testing tutors.
Before a child can say "meritocracy," he or she is embarking on an
overseas adventure to New Guinea that will lead, by design, to that
killer college application essay that wows admissions counselors from
Harvard, Yale or Princeton for its originality and sense of social and
democratic purpose, a tonier version of the Miss America contestant's
"I'm for world peace" speech.
If all the time and effort devoted to this enterprise were about a
child's or young person's love of learning, creativity and personal
development, I for one would be considerably less cynical. But the elite
college admissions game--under the near-tyrannical guidance of US News
& World Report's annual ranking of the nation's "best" colleges--is
all too often about the pursuit of prestige at almost any cost, a game
that perpetuates the big lie that one can't find a decent education at
anything less than a Brand Name school.
I was excited to read Jacques Steinberg's new book about elite college
admissions, The Gatekeepers, anticipating a breath of fresh air on the
subject from the New York Times education reporter. As he introduces
himself and his book, we learn that this son of a Massachusetts
anesthesiologist sees himself as a sort of accidental alumnus of the Ivy
League, who pleads ignorance as to how he got admitted to Dartmouth in
the early 1980s. But he obviously owes a lot to his very assertive mom,
a former nurse, who on the family's exploratory visit to the Dartmouth
campus grabbed her son by the collar after an admissions officer's spiel
and strode to the front of the room to magisterially inform the
official, "We're the Steinbergs." The rest, as they say, is history.
Steinberg strikes me as a lucky man indeed. After joining the Times and
becoming a national education correspondent, he attended the 1999
conference of the National Association of College Admission Counseling
in Orlando, Florida. While there, he was approached by William Hiss, an
administrator at Bates College in Lewiston, Maine. Hiss wondered whether
Steinberg would like exclusive access to the selective college's
admissions process, noteworthy in that it does not require applicants to
submit SAT or ACT scores. Although Steinberg and his editor, Ethan
Bronner, were intrigued by the idea, they declined Hiss's offer in favor
of a less "anomalous" college--i.e., one that continued to rely on
gatekeeping tests like the SAT.
After being turned down by several colleges for the kind of exclusive,
total-access deal the Times wanted, Steinberg found what would seem a
perfect match. At Wesleyan, located in Middletown, Connecticut, midway
between Hartford and New Haven, college officials agreed to provide the
reporter unfettered access to its admissions process from fall 1999 to
spring 2000, culminating in the Times's series of articles upon which
The Gatekeepers is based. Wesleyan agreed not to meddle in Steinberg's
stories, gave him access to individual students and their families and
allowed him to observe any and all meetings in its admissions
deliberations--in other words, a reporter's dream assignment. (It
couldn't have hurt Steinberg's cause that his boss, Bronner, graduated
from Wesleyan in 1976, as one discovers in the book's acknowledgments.)
It's all very cozy and well connected in these pages, with lucky people
and impressive degrees from prestigious institutions to spare. When we
meet Steinberg's featured "gatekeeper," a Wesleyan admissions officer
named Ralph Figueroa, a Los Angeles native who ends up in Middletown
after a stint working admissions at Occidental College in LA, I'm
thinking, cool choice. This ought to be interesting, a Mexican-American
man with a working-class background (the rebel in me hopes), now an
insider shaking things up at one elite private college in comfy New England.
Instead, we learn that the 34-year-old Figueroa's dad was a lawyer and
graduate of Loyola Law School; that his mom earned a master's degree in
education, and became a mover and shaker in an organization called
Expanded Horizons, a nationally recognized program (held in high regard
by Ronald Reagan and his Education Secretary, Terrel Bell) that helped
Mexican-American kids prepare for college. His parents frequently took the children on trips to colleges like Pomona, Occidental and Caltech.
The grooming and preparation paid off for the Figueroa clan. Ralph
graduated from Stanford--he turned down Harvard, Yale and Princeton--and
went on to UCLA Law. His several siblings also attended elite schools,
including UCLA Law and Stanford Law, and one sister, like Ralph, would
find a niche in admissions at Caltech.
As if adopting the same mesmerizing tricks as the colleges themselves,
holding out the impossible dream of an elite college education to the
masses in order to up their application counts (which improves
selectivity rankings), Steinberg and his publisher pitch this book as
"required reading for every parent of a high school age child and for
every student" who is applying to college. But it's easy to imagine
ordinary parents and their kids--the overwhelming majority of whom
attend ordinary public high schools that aren't even remotely on the map
of "feeder" schools highly regarded by elite colleges--being completely
intimidated by this book. I could scarcely find one person in these
pages, whether an admissions officer or student, whose parents weren't
at least modestly well educated or who didn't have some connection to
either a brand-name college or elite prep school. Most of the admissions
officers at Wesleyan were either Wesleyan grads or had connections to
other elite schools (a fairly common trait, from what I can tell, among
the admissions staffs at elite private colleges). In fact, I was able to
find just one student in Steinberg's world whose parents had not
attended college, a most admirable young New Yorker named Aggie. But
even she managed to find her way out of a downtrodden public school in
New York City to the Oldfields School, a venerable girls' prep school in Maryland.
But let's be real. Readers of this book will more likely be the
well-educated parents and high-flying students who do attend schools
that are "on the map," and for whom prestigious colleges and personal
connections to those schools are all part of the entitlement package;
people for whom "state university" is a dirty word. And though Steinberg
is skillful at telling the stories of Ralph and a handful of young
people who apply to Wesleyan and other highly ranked colleges, I can
easily imagine sophisticated readers sighing a collective, "So what?"
There's very little in Steinberg's highly detailed narrative that such
readers won't already have surmised about the competitive admissions game.
When highly selective colleges talk about their admissions process to
prospective students, they like to convey the notion that there are no
formulas, no tricks, no standard combination of grades or test scores
that will insure one's admission. It's standard advice that Steinberg,
who calls the process "messy," would undoubtedly agree with. True, there
may be no magic formulas, but colleges like Wesleyan do render their judgments about individuals within some mighty formulaic parameters.
Readers probably won't be surprised to learn that Wesleyan admissions
officials watch their ranking in US News & World Report like nuclear
plant operators monitoring reactor heat levels. In fact, Steinberg
describes one seasoned admissions officer, Greg Pyke, whose task is to
keep running tabs on median SAT levels and other indicators of the
admitted class important to US News, in order to insure that the college
improves upon its previous year's ranking.
The most revealing aspects of the process can be gleaned between the
lines of Steinberg's account. For example, many students and parents who
buy into this game have long known that test scores play a very
important, if not decisive, role in it. Recent surveys by the National
Association of College Admission Counseling confirm this. According to
NACAC's December 2001 survey, fully 86 percent of admissions officials
rated test scores as of either considerable or moderate importance, just
slightly below the importance the gatekeepers attach to grades in
college prep courses (89 percent).
As competition for admission has intensified and acceptance rates have
declined at elite private colleges in recent years, the weight attached
to gatekeeping tests has also increased, according to a recent report by
the Association for Institutional Research. Meanwhile, private colleges
have soured on high school grades, arguably a more egalitarian indicator
of merit and once the most important criterion in admissions, this
despite the well-known correlation between SAT scores and the
educational and income levels of one's parents.
Steinberg, like the admissions officers who are his subjects, is rarely
as explicit about these matters as the data presented in those surveys.
But parents and kids who know the game won't bat an eye at how heavily
colleges rely on gatekeeping tests, their claims to the contrary
notwithstanding. For example, Wesleyan admissions officers seem to think
that a 50- or 100-point difference in SAT scores between two candidates
means something significant about their future academic performance in
college, a patently false use of test scores. Steinberg, ever
nonjudgmental, allows such assumptions to pass virtually unchallenged,
although they have been powerfully refuted in numerous studies. Bates,
the SAT-optional college that first approached Steinberg, discovered no
differences between the academic performance of Bates students who
declined to submit SAT scores when applying, and that of SAT-submitters,
whose test scores were, on average, 160 points higher.
Deeply ingrained beliefs in the power of cognitive screens like the SAT
and about the importance of good grades in AP courses were not the only
things at the top of Wesleyan's gatekeeping criteria. There were two
additional ones, earmarked by a manila folder. "If an applicant was the
child of an alumna or alumnus, a dark orange square was added,"
Steinberg writes. "If an applicant had identified him- or herself as a
member of a minority group, a yellow circle was added. These details
were considered too important for a reader to overlook, and the coding
system was designed to ensure that they were given due attention."
Within these strictures Wesleyan's gatekeepers exercised a small degree
of wiggle room, and Steinberg does his best work describing the
difficult process of selecting a class of some 700 students from about
7,000 applications. Grateful, perhaps, for the access Wesleyan gave him,
he writes admiringly of the gatekeepers' studious commitment to be fair
and objective. But parents with high-school-age children are likely to
be appalled at the inconsistencies, and even arbitrary nature, of some
of the judgments made by Figueroa and his colleagues. The SAT, for
instance, which is often described by admissions officials, the College
Board and the Educational Testing Service as a "common yardstick," looks
more like a magic stick out of Alice in Wonderland, meaning whatever
Wesleyan's gatekeepers want it to mean, depending on whether the
applicant is a member of a minority group, an athlete or a member of the
Wesleyan "family." Isn't meritocracy grand?
Meanwhile, Andrew Fairbanks, a former Wesleyan admissions official, has
given us a very different account of elite college admissions, in a book
written with Christopher Avery and Richard Zeckhauser, both professors
at Harvard's Kennedy School of Government. While Steinberg uses
character and narrative to reveal the inner workings of one college's admissions process,
the authors of The Early Admissions Game: Joining the Elite seek to
expose this often-deceitful and manipulated game in order to make it
more fair to all comers. Indeed, they say they hope to arm more students
and parents with information on how the game is played, and therefore
help to reduce the unfair advantages the present system affords
well-connected and affluent students. Although the book is focused on a
detailed investigation of early admissions programs, its reach is far
broader, if only because early admissions has become such a key element
of competitive college-recruitment efforts in recent years. As one
student who was recently admitted to Harvard told the authors, "That's
just how you apply to Harvard."
Although the writing lacks the journalistic polish of Steinberg's
account, and although the organization is at times disjointed, readers
seeking solid information about elite college admissions will find The
Early Admissions Game refreshingly frank. Other readers concerned about
restoring some equity to the process will also appreciate the book's
generosity of spirit and suggestions for reform.
The authors present a devastating portrait of elite college
admissions--and early admissions in particular--as an elaborate and
complicated "game" in the most literal meaning of that word, played by
colleges seeking competitive advantages over rivals, students seeking to
maximize their opportunities for entry into prestigious colleges and
school counselors striving to maintain the reputations of their "feeder"
schools in terms of their efficiency in placing students at highly
ranked colleges. As in all competitive games, the various players often
have little incentive to be forthcoming about their tactics and every
incentive to conceal strategic information from public view. Not
surprisingly, the authors suggest, the winners of the game tend to be
privileged students who have access to highly skilled counselors with
information pipelines to elite college admissions offices.
At the center of the book is a social scientific investigation that
makes powerful analytical use of admissions data at elite colleges
spanning several years and including some 500,000 college applications,
which reveals a fascinating statistical portrait of early admissions.
(Early admissions programs include both "early decision" ones, which
permit just one early application and bind students to that college if
they are admitted, and "early action" programs, which allow multiple
applications and do not bind students to colleges that accept them
early.) In public, most institutions are quick to reassure students and
parents that there's no advantage to applying early as opposed to
waiting to throw one's hat into the "regular" admissions pool. But the
advantages afforded early applications are considerable.
Consider Princeton. One need only note the increasingly small number of
openings remaining from the regular admissions pool to see why many
students who don't walk on water might find it in their best interest to
apply early. Of the 2,000 students admitted in one recent year at
Princeton, for instance, only 500 had applied during the regular
admissions cycle. The rest were either early applicants or "hooked"
applicants (underrepresented minorities, athletes or children of alumni).
At Princeton, which runs an early decision program, the authors estimate
that while its acceptance rate from the regular applicant pool was
slightly below 20 percent, the college's acceptance rate for early
applicants ballooned to well over 50 percent. The same pattern held for
virtually all the highly selective colleges in the authors' study. At
Columbia, for example, more than seven in ten students who applied early
were admitted, compared with about three in ten students applying during
the regular period.
When colleges concede such glaring differences in their admissions
rates, they explain that early applicants tend to be more attractive
candidates in terms of test scores, grades and other factors. The
authors easily destroy this canard by comparing early and regular
admission rates for students with similar credentials. Applying early to
elite colleges, they demonstrate, produces the equivalent of a 100-point
SAT boost for early action applicants and a 190-point boost for early
decision applicants. For the time-strapped student oddsmaker, the game
presents some interesting choices. Spend $1,000 on an SAT prep course,
or apply early? "Which is easier?" the authors ask. "To submit an early
application? Or to master the trombone to the level of all-state
orchestra or become a semifinalist in the Westinghouse Science Talent Search?"
So what's in it for the colleges? Why give early decision applicants the
equivalent of nearly 200 points on the SAT? Part of the answer, it
seems, is that they have an Enron problem. The unfortunate fact of elite
college admissions in the era of US News & World Report is that the
magazine's annual ranking of the nation's best colleges now rules this
marketplace with an iron fist. The magazine operates under the fiction
that college quality is tantamount to median SAT scores, acceptance
rates and other more arcane measures such as "yield" rates, defined as
the percentage of the admitted students who decide to enroll--which
might be more accurately dubbed the "prestige index." In any case,
colleges have discovered how early admission programs easily permit them
to manipulate numbers in order to elevate, however marginally, their US
News rankings. For example, an early decision applicant will almost
certainly enroll, thus instantly boosting the college's yield rate.
Who takes most advantage of early admissions and its generous payoffs?
Primarily children from affluent families, students for whom a college's
financial aid offer isn't a deal breaker. Because early decision programs in particular lock admitted students into a single college, needy students are unable to compare or negotiate financial aid packages among schools.
The authors contend that colleges also exploit the monopoly power
granted through early decision programs in order to hold down their
financial aid budgets. Furthermore, students with access to good
information about early admission programs, including their improved
chances of admission, also gain. And, again, such students tend to be
affluent. Reliable information, the authors found, is scarce at public high schools where few students go on to college, and plentiful at elite private schools and highly regarded public schools where most students do.
Among the most compelling passages in The Early Admissions Game is its
description of the elaborate, back-channel "slotting" operations by
which highly skilled and well-connected high school counselors work hand
in hand with elite college admissions officers to place students. To
outsiders, such collaboration might be scandalous, but for some students
recently accepted to places like Harvard and Yale whom the authors
interviewed, it's rather ho-hum. Listen to Mira (Harvard '98): "My
counselor has a good relationship with the Harvard admissions office. He
handpicks people for admission and tells Harvard who to admit." Or Dan
(Yale '98): "If I wanted to attend Yale, [the counselor] would get me in."
No book could paint such a damning portrait without offering suggestions
for reform of a system that produces such inequitable results. The
authors discuss various options, including the frequently suggested
proposal that colleges agree to a ban on early admission programs.
That's not likely to fly, the authors argue, because any given college
would have great incentive to violate the ban by picking off its
competitors' most promising applicants. "If we gave it up," Harvard
admissions dean William Fitzsimmons suggested, "other institutions
inside and outside the Ivy League would carve up our class and our
faculty would carve us up."
As an alternative to the current system, the authors propose to set up
an independent, Internet-operated clearinghouse, through which students
could state their first preference for college without a binding
commitment. The clearinghouse would share the information among all
participating colleges in order to preclude any deception. Colleges,
which currently spend a great deal of money on statistical models trying
to predict which students will ultimately enroll, could rely instead on
the students' stated preferences. Such a simple, relatively inexpensive
solution would also diminish the importance of the sorts of back-channel
slotting operations that now give privileged applicants such an
advantage in the early admissions game.
Meanwhile, however, there's little reason to hope the game will become
more equitable anytime soon. Elite colleges appear eager to install
early admissions programs as fixtures for building and managing their
entering classes. As of December, for example, the University of
Pennsylvania had already filled nearly half its freshman class with
early admits. At Yale and Columbia, more than 40 percent of the entering
class was already spoken for. Millions more high school students from
increasingly well-educated families will continue to place their hopes
and dreams on a tiny fraction of colleges that admit an ever-smaller percentage of those
who apply. At Harvard, for example, the acceptance rate of 11 percent in
the year 2000 was nearly half what it was in 1990. By midyear, testing
companies had reported surges in registrations for taking entrance
exams, with ACT Inc. boasting its biggest gain in thirty-five years.
All this in a nation where nearly 40 percent of adults believe they
currently are, or will be, among the richest 1 percent of Americans. Who
knows, maybe we'll all get lucky.
The SAT has been on the ropes lately. The University of California
system has threatened to quit using the test for its freshman
admissions, arguing that the exam has done more harm than good. The
State of Texas, responding to a federal court order prohibiting its
affirmative action efforts, has already significantly curtailed the
importance of the SAT as a gatekeeper to its campuses. Even usually
stodgy corporate types have started to beat up on the SAT. Last year,
for example, a prominent group of corporate leaders joined the National
Urban League in calling upon college and university presidents to quit
placing so much stock in standardized admissions tests like the SAT,
which they said were "inadequate and unreliable" gatekeepers to college.
Then again, if the SAT is anything, it's a survivor. The SAT
enterprise--consisting of its owner and sponsor, the College Board, and
the test's maker and distributor, the Educational Testing Service--has
gamely reinvented itself over the years in myriad superficial ways,
hedging against the occasional dust-up of bad public relations. The SAT,
for example, has undergone name changes over the years in an effort to
reflect the democratization of higher education in America and
consequent changes in our collective notions about equal opportunity.
But through it all, the SAT's underlying social function--as a sorting
device for entry into or, more likely, maintenance of American
elitehood--has remained ingeniously intact, a firmly rooted icon of
American notions about meritocracy.
Indeed, the one intangible characteristic of the SAT and other
admissions tests that the College Board would never want to change is
the virtual equation, in the public's mind, of test scores and academic
talent. Like the tobacco companies, ETS and the College Board (both are
legally nonprofit organizations that in many respects resemble
profit-making enterprises) put a cautionary label on the product.
Regarding their SAT, the organizations are obliged by professional codes
of proper test practices to inform users of standardized admissions
tests that the exams can be "useful" predictors of later success in
college, medical school or graduate school, when used in conjunction
with other factors, such as grades.
But in practice, admissions tests occupy no such modest place in
American life. Most clear-eyed Americans know that results on the SAT,
Graduate Record Exam or the Medical College Admission Test are widely viewed as synonymous with academic talent in higher education. Whether it's true or not--and there's lots of evidence that it's not--is quite beside the point.
Given the inordinate weight that test scores play in the American
version of meritocracy, it's no surprise that federal courts have been
hearing lawsuits from white, middle-class applicants complaining they
were denied admission to law school even though their LSAT scores were
fifty points higher than those of minority applicants who were
admitted; that neoconservative doomsayers warn that the academic quality
of America's great universities will plummet if the hordes of unwashed
(read: low test scores) are allowed entry; and that articles are written
under titles like "Backdoor Affirmative Action," arguing that
de-emphasizing test scores in Texas and California is merely a covert
tactic of public universities to beef up minority enrollments in
response to court bans on affirmative action.
Indeed, Rebecca Zwick, a professor of education at the University of
California, Santa Barbara, and a former researcher at the Educational
Testing Service, wrote that "Backdoor Affirmative Action" article for
Education Week in 1999, implying that do-gooders who place less
emphasis on test scores in order to raise minority enrollments are
simply blaming the messenger. And so it should not be surprising that
the same author would provide an energetic defense of the SAT and
similar exams in her new book, Fair Game? The Use of Standardized
Admissions Tests in Higher Education.
Those, like Zwick, who are wedded to the belief that test scores are
synonymous with academic merit will like this concise book. They will
praise its 189 pages of text as, finally, a fair and balanced
demystification of the esoteric world of standardized testing. Zwick and
her publisher are positioning the book as the steady, guiding hand
occupying the sensible middle ground in an emotional debate that they
claim is dominated by journalists and other uninformed critics who don't
understand the complex subject of standardized testing. "All too
often...discussions of testing rely more on politics or emotion than on
fact," Zwick says in her preface. "This book was written with the aim of
equipping contestants in the inevitable public debates with some solid
information about testing."
If only it were true. Far from reflecting the balanced approach the
author claims, the book is thinly disguised advocacy for the status quo
and a defense of the hegemony of gatekeeping exams for college and
university admissions. It could be more accurately titled (without the
bothersome question mark) "Fair Game: Why America Needs the SAT."
As it stands, the research staff of the College Board and the
Educational Testing Service, Zwick's former employer, might as well have
written this book, as she trots out all the standard arguments those organizations have used for years to show why healthy doses of standardized testing are really good for American education. At almost every opportunity, Zwick quotes an ETS or College Board study in the most favorable light, couching it as the final word on a particular issue, while casting aspersions on
other studies and researchers (whose livelihoods don't depend on selling
tests) that might well draw different conclusions. Too often Zwick
provides readers who might be unfamiliar with the research about testing
with an overly simplistic and superficial treatment. At worst, she
leaves readers with grossly misleading impressions.
After providing a quick and dirty account of IQ testing at the turn of
the last century, a history that included the rabidly eugenic beliefs of
many of the early testmakers and advocates in Britain and the United
States ("as test critics like to point out," Zwick sneers), the author
introduces readers to one of the central ideologies of mental testing:
the use of exams to sort a society's young for opportunities in higher education. Sure,
mental testing has brought some embarrassing moments in history that we
moderns frown on nowadays, but the testing movement has had its good
guys too. Rather than being a tool to promote and protect the interests
of a society's most privileged citizens, the cold objectivity of
standardized testing remains, in this view, an important ideal.
According to this belief, standardized testing for admission to college
serves the interest of meritocracy, in which people are allowed to shine
by their wits, not their social connections. That same ideology, says
Zwick, drove former Harvard president James Bryant Conant, whom Zwick
describes as a "staunch supporter of equal opportunity," in his quest to
establish a single entrance exam, the SAT, for all colleges. Conant, of
course, would become the first chairman of the board of the newly formed
Educational Testing Service. But, as Nicholas Lemann writes in his 1999
book The Big Test: The Secret History of the American
Meritocracy, Conant wasn't nearly so interested in widening
opportunity to higher education as Zwick might think. Conant was keen on
expanding opportunity, but, as Lemann says, only for "members of a tiny
cohort of intellectually gifted men." Disillusioned only with the form
of elitism that had taken shape at Harvard and other Ivy League
colleges, which allotted opportunities based on wealth and parentage,
Conant was nevertheless a staunch elitist, an admirer of the
Jeffersonian ideal of a "natural aristocracy." In Conant's perfect
world, access to this new kind of elitehood would be apportioned not by
birthright but by performance on aptitude tests. Hence the SAT, Lemann
writes, "would finally make possible the creation of a natural aristocracy."
The longstanding belief that high-stakes mental tests are the great
equalizer of society is dubious at best, and at worst a clever piece of
propaganda that has well served the interests of American elites. In
fact, Alfred Binet himself--among the fathers of IQ testing, whose
intelligence scale Lewis Terman would adapt into the Stanford-Binet
test, a precursor of the modern SAT--observed the powerful relationship
between a child's performance on his so-called intelligence test and
that child's social class, a phenomenon Binet described in The
Development of Intelligence in Children (published in English in 1916).
And it's the same old story with the SAT. Look at the college-bound high
school seniors of 2001 who took the SAT, and the odds are still firmly
stacked against young people from modest economic backgrounds. A
test-taker whose parents did not complete high school can
expect to score fully 171 points below the SAT average, College Board
figures show. On the other hand, high schoolers whose moms and dads have
graduate degrees can expect to outperform the SAT average by 106 points.
What's more, the gaps in SAT performance between whites and blacks and
between whites and Mexican-Americans have only ballooned in the past ten
years. The gap between white and black test-takers widened five points
and eleven points on the SAT verbal and math sections, respectively,
between 1991 and 2001. SAT score gaps between whites and
Mexican-Americans surged a total of thirty-three points during that same period.
For critics of the national testing culture, such facts are troubling indeed, suggestive of a large web of inequity that permeates society, with educational opportunities distributed neatly along class and race lines, from preschool through medical school. But for Zwick, the notion
of fairness when applied to standardized admissions tests boils down to
a relatively obscure but standard procedure in her field of
"psychometrics," which is in part the study of the statistical
properties of standardized tests.
Mere differences in average test scores between most minority groups and whites, or among social classes, aren't all that interesting to Zwick. More
interesting, she maintains, is the comparative accuracy of test scores
in predicting university grades between whites and other racial groups.
In this light, she says, the SAT and most standardized admissions tests
are not biased against blacks, Latinos or Native Americans. In fact, she
says, drawing on 1985 data from a College Board study that looked at
forty-five colleges, those minority groups earned lower grades in
college than predicted by their SAT scores--a classic case of
"overprediction" that substantiates the College Board claim that the SAT
is more than fair to American minorities. By contrast, if the SAT is
unfair to any group, it's unfair to whites and Asian-Americans, because
they get slightly better college grades than the SAT would predict.
Then there's the odd circumstance when it comes to standardized
admissions tests and women. A number of large studies of women and
testing at the University of California, Berkeley, the University of
Michigan and other institutions have consistently shown that while women
(on average) don't perform as well on standardized tests as male
test-takers do, women do better than men in actual classroom work.
Indeed, Zwick acknowledges that standardized tests, in a reversal of the pattern for most minority groups, tend to "underpredict" the actual academic performance of women.
But on this question, as with so many others in her book, Zwick's
presentation is thin, more textbookish than the thorough examination and
analysis her more demanding readers would expect. Zwick glosses over a
whole literature on how the choice of test format, such as
multiple-choice versus essay examinations, rewards some types of
cognitive approaches and punishes others. For example, there's evidence
to suggest that SAT-type tests dominated by multiple-choice formats
reward speed, risk-taking and other surface-level "gaming" strategies
that may be more characteristic of males than of females. Women and
girls may tend to approach problems somewhat more carefully, slowly and
thoroughly--cognitive traits that serve them well in the real world of
classrooms and work--but hinder their standardized test performance
compared with that of males.
Beyond Zwick's question of whether the SAT and other admissions tests
are biased against women or people of color is the perhaps more basic
question of whether these tests are worthwhile predictors of academic
performance for all students. Indeed, the ETS and the College Board sell
the SAT on the rather narrow promise that it helps colleges predict
freshman grades, period. On this issue, Zwick's presentation is not a
little pedantic, seeming to paint anyone who doesn't claim to be a
psychometrician as a statistical babe in the woods. Zwick quotes the
results of a College Board study published in 1994 finding that one's
SAT score by itself accounts for about 13 percent of the differences in
freshman grades; that one's high school grade average is a slightly
better predictor of college grades, accounting for about 15 percent of
the grade differences among freshmen; and that the SAT combined with
high school grades is a better predictor than the use of grades alone.
In other words, it's the standard College Board line that the SAT is
"useful" when used with other factors in predicting freshman grades. (It
should be noted that Zwick, consistent with virtually all College Board
and ETS presentations, reports her correlation statistics without
converting them into what's known as "R-squared" figures--the square of the correlation coefficient, which expresses the proportion of variation explained. In my view, the latter statistics provide readers with a common-sense understanding
of the relative powers of high school grades and test scores in
predicting college grades. I have made those conversions for readers in
the statistics quoted above.)
Unfortunately, Zwick misrepresents the real point that test critics make
on the question of predictive validity of tests like the SAT. The
salient issue is whether the small extra gains in predicting freshman
grades that the SAT might afford individual colleges outweigh the social
and economic costs of the entire admissions testing enterprise, costs
borne by individual test-takers and society at large.
Even on the narrow question of the usefulness of the SAT to individual
colleges, Zwick does not adequately answer what's perhaps the single
most devastating critique of the SAT. For example, in the 1988 book
The Case Against the SAT, James Crouse and Dale Trusheim argued
compellingly that the SAT is, for all practical purposes, useless to
colleges. They showed, for example, that if a college wanted to maximize
the number of freshmen who would earn a grade-point average of at least
2.5, then the admissions office's use of high school rank alone as the
primary screening tool would result in 62.2 percent "correct"
admissions. Adding the SAT score would improve the rate of correct
decisions by only about 2 in 100. The researchers also showed,
remarkably, that if the admissions objective is broader, such as
optimizing the rate of bachelor's degree completion for those earning
grade averages of at least 2.5, the use of high school rank by itself
would yield a slightly better rate of prediction than if the SAT scores
were added to the mix, rendering the SAT counterproductive. "From a
practical viewpoint, most colleges could ignore their applicants' SAT
score reports when they make decisions without appreciably altering the
academic performance and the graduation rates of students they admit,"
Crouse and Trusheim concluded.
At least two relatively well-known cases of colleges at opposite ends of
the public-private spectrum, which have done exactly as Crouse and
Trusheim suggest, powerfully illustrate the point. Consider the
University of Texas system, which was compelled by a 1996 federal
appeals court order, the Hopwood decision, to dismantle its
affirmative-action admissions programs. The Texas legislature responded
to the threat of diminished diversity at its campuses with the "top 10
percent plan," requiring public universities to admit any student
graduating in the top 10 percent of her high school class, regardless of SAT scores.
Zwick, of course, is obliged in a book of this type to mention the Texas
experience. But she does so disparagingly and without providing her
readers with the most salient details on the policy's effects in terms
of racial diversity and the academic performance of students. Consider
the diversity question. Some progressives might at first have recoiled at the new policy as itself an attack on affirmative action, but the feared losses have not materialized. In fact, at the University of Texas at
Austin, the racial diversity of freshman classes has been restored to
pre-Hopwood levels, after taking an initial hit. Indeed, the
percentage of white students at Austin reached a historic low point in
2001, at 61 percent. What's more, the number of high schools sending
students to the state's flagship campus at Austin has significantly
broadened. The "new senders" to the university include more inner-city
schools in Dallas, Houston and San Antonio, as well as more rural
schools than in the past, according to research by UT history professor
David Montejano, among the plan's designers.
But the policy's impact on academic performance at the university might
be even more compelling, since that is the point upon which
neoconservative critics have been most vociferous in their condemnations
of such "backdoor" affirmative action plans that put less weight on test
scores. A December 1999 editorial in The New Republic typified
this road-to-ruin fiction: Alleging that the Texas plan and others like
it come "at the cost of dramatically lowering the academic
qualifications of entering freshmen," the TNR editorial warned,
these policies are "a recipe for the destruction of America's great public universities."
Zwick, too, neglects to mention the facts about academic performance of
the "top 10 percenters" at the University of Texas, who have proven the
dire warnings to be groundless. In 2000, at every SAT score interval, from less than 900 to 1,500 and higher, students admitted without regard to their SAT scores earned better grades than
their non-top 10 percent counterparts, according to the university's
latest research report on the policy.
Or consider that the top 10 percenters averaged a GPA of 3.12 as freshmen. Their SAT average was about 1,145, fully 200 points lower than that of non-top 10 percent students, who earned slightly lower GPAs of 3.07. In
fact, the grade average of 3.12 for the automatically admitted students
with moderate SAT scores was equal to the grade average of non-top 10
percenters coming in with SATs of 1,500 and higher. The same pattern has
held across the board, and for all ethnic groups.
Bates College in Lewiston, Maine, seemed to anticipate the message of the Crouse and Trusheim research. Bates ran its own numbers, found that the SAT was simply not an adequate predictor of academic success for many students, and abandoned the test as an entry requirement several years ago. Other highly
selective institutions have similar stories to tell, but Bates serves to
illustrate. In dropping the SAT mandate, the college now gives students
a choice of submitting SATs or not. But it permits no choice in
requiring that students submit a detailed portfolio of their actual work
and accomplishments while in high school for evaluation, an admissions
process completed not just by admissions staff but by the entire Bates faculty.
As with the Texas automatic admission plan, Zwick would have been
negligent not to mention the case of Bates, and she does so in her
second chapter; but it's an incomplete and skewed account. Zwick quotes
William Hiss, the former dean of admissions at Bates, in a 1993
interview in which he suggests that the Bates experience, while perhaps
appropriate for a smaller liberal arts college, probably couldn't be
duplicated at large public universities. That quote well serves Zwick's
thesis that the SAT is a bureaucratically convenient way to maintain
academic quality at public institutions like UT-Austin and the
University of California. "With the capability to conduct an intensive
review of applications and the freedom to consider students' ethnic and
racial backgrounds, these liberal arts colleges are more likely than
large university systems to succeed in fostering diversity while toeing
the line on academic quality," Zwick writes.
But Zwick neglects to mention that Hiss has since disavowed his caveats
about Bates's lessons for larger public universities. In fact, Hiss, now a senior administrator at the college, grows palpably irritated at the inequalities built into admissions systems that put too much stock in mental testing. He told me in a late 1998 interview, "There are twenty
different ways you can dramatically open up the system, and if you
really want to, you'll figure out a way. And don't complain to me about
the cost, that we can't afford it."
Zwick punctuates her brief discussion of Bates and other institutions
that have dropped the SAT requirement by quoting from an October 30,
2000, article, also in The New Republic, that purportedly
revealed the "dirty little secret" on why Bates and other colleges have
abandoned the SAT. The piece cleverly observed that because SAT
submitters tend to have higher test scores than nonsubmitters, dropping
the SAT has the added statistical quirk of boosting SAT averages in
U.S. News & World Report's coveted college rankings. That
statistical anomaly was the smoking gun the TNR reporter needed
to "prove" the conspiracy.
But to anyone who has seriously researched the rationales colleges have
used in dropping the SAT, the TNR piece was a silly bit of
reporting. At Bates, as at the University of Texas, the SAT
"nonsubmitters" have performed as well or better academically than
students who submitted SATs, often with scores hundreds of points lower
than the SAT submitters. But readers of Fair Game? wouldn't know it.
One could go on citing many more cases in which Zwick misleads her
readers through lopsided reporting and superficial analysis, such as her
statements that the Graduate Record Exam is about as good a predictor of
graduate school success as the SAT is for college freshmen (it's not,
far from it), or her overly optimistic spin on the results of many
studies showing poor correlations between standardized test scores and
later career successes.
Finally, Zwick's presentation might have benefited from a less
textbookish style, with more enriching details and concrete examples.
Instead, she tries to position herself as a "just the facts" professor
who won't burden readers with extraneous contextual details or accounts
of the human side of the testing culture. But like the enormously
successful--at least in commercial terms--standardized tests themselves,
which promote the entrenched belief in American society that genuine
learning and expert knowledge are tantamount to success on Who Wants
to Be a Millionaire-type multiple-choice questions, books like Fair Game? might be the standardized account that some readers really want.
As a Russian studies major at Yale in the 1970s, I observed Soviet
"elections" that were conducted more fairly than the 2002 Yale
Corporation's board of trustees election. Why is the Yale Corporation so
threatened by the candidacy of a prominent New Haven pastor who cares
about Yale and its workers?
The last time a prospective trustee was nominated by petition was almost
forty years ago, when William Horowitz became Yale's first elected
Jewish trustee. Back then 250 signatures were required for ballot
qualification; that has since been raised to 3 percent of eligible
alumni--some 3,200 signatures today. The Rev. Dr. W. David Lee, an
African-American pastor of one of New Haven's largest churches and a
graduate of the Yale Divinity School, gathered 4,870 signatures. If
elected, he would be the only New Haven resident other than Yale's
president to sit on the corporation's board.
But he is also supported by Yale's employee unions, and the
university--one of America's great institutions of higher learning--does
not like that. Normally, the Standing Committee for the Nomination of
Alumni Fellows of the Association of Yale Alumni nominates two or three
alumni to stand for election. This year, apparently threatened by Lee's
grassroots efforts, the committee nominated only one, Maya Lin, creator
of the Vietnam War memorial, around whom the Yale Corporation and its
allies could rally.
As an alumnus, I received no fewer than six mailings--from the alumni
organization, from wealthy Yale alumni, from former corporation board
members--all criticizing Lee for failing to identify who paid for his
mailing, for his "aggressive campaign" and for his "ties to special
interests, labor unions."
In a campaign flier (containing no disclosure of who paid for it), the
Association of Yale Alumni quoted comments from Lee critical of the
university. It is not surprising that a minister of a large church at
which many Yale employees worship might at times express substantial
differences with a university that pays many of those workers less than
a living wage.
As if the Yale Corporation had not already made its interests known,
even the ballot package--paid for by the university and sent to all
voters--was slanted in favor of the corporation's candidate. The
official publication intimates support for its favored candidate from
"over 700 alumni," including the Association of Yale Alumni, the
officers of Yale college classes and Yale clubs and other alumni
associations. The other candidate, the Yale Corporation stated in the
ballot package, was "nominated by petition" (as though Lee's 4,870 signatures did not indicate the support of those alumni).
Reminiscent of elections conducted in one-party states, the corporation refused to allow an observer to be present when the ballots were counted. Such a provision, Lee was told, is not in the Yale bylaws.
It is unfortunate that Yale, which has produced so many national
leaders, has earned a widespread reputation for its antiunion activities
[see Kim Phillips-Fein, "Yale Bites Unions," July 2, 2001]. To all but
declare war on Yale's workers and its union, and on an outstanding young
New Haven leader, can only exacerbate city-university tensions and roil
Yale's already troubled labor-management waters.
How could one pro-worker candidate who aspires to a lone seat on a board
of nineteen of America's most influential people unleash the fury of an
entire university hierarchy? Why do powerful people--the kind who sit on
Yale's board--feel so threatened by a local minister? Why can't one of
the world's most prestigious universities--with a multibillion-dollar
endowment--pay its workers a living wage?
For God. For Country. For Yale.
"Thirty years from now the big university campuses will be relics," business "guru" Peter Drucker proclaimed in Forbes five years ago. "It took more than 200 years for the printed book to create the modern school. It won't take nearly that long for the [next] big change." Historian David Noble echoes Drucker's prophecies but awaits the promised land with considerably less enthusiasm. "A dismal new era of higher education has dawned," he writes in Digital Diploma Mills. "In future years we will look upon the wired remains of our once great democratic higher education system and wonder how we let it happen."
Most readers of this magazine will side with Noble in this implicit debate over the future of higher education. They will rightly applaud his forceful call for the "preservation and extension of affordable, accessible, quality education for everyone" and his spirited resistance to "the commercialization and corporatization of higher education." Not surprisingly, many college faculty members have already cheered Noble's critique of the "automation of higher education." Although Noble himself is famously resistant to computer technology, the essays that make up this book have been widely circulated on the Internet through e-mail, listservs and web-based journals. Indeed, it would be hard to come up with a better example of the fulfillment of the promise of the Internet as a disseminator of critical ideas and a forum for democratic dialogue than the circulation and discussion of Noble's writings on higher education and technology.
Noble performed an invaluable service in publishing online the original articles upon which this book is largely based. They helped initiate a broad debate about the value of information technology in higher education, about the spread of distance education and about the commercialization of universities. Such questions badly need to be asked if we are to maintain our universities as vital democratic institutions. But while the original essays were powerful provocations and polemics, the book itself is a disappointing and limited guide to current debates over the future of the university.
One problem is that the book has a dated quality, since the essays are reproduced largely as they were first circulated online starting in October 1997 (except for some minor editorial changes and the addition of a brief chapter on Army online education efforts). In those four-plus years, we have watched the rise and fall of a whole set of digital learning ventures that go unmentioned here. Thus, Noble warns ominously early in the book that "Columbia [University] has now become party to an agreement with yet another company that intends to peddle its core arts and science courses." But only in a tacked-on paragraph in the next to last chapter do we learn the name of the company, Fathom, which was launched two years ago, and of its very limited success in "peddling" those courses, despite Columbia president George Rupp's promise that it would become "the premier knowledge portal on the Internet." We similarly learn that the Western Governors' Virtual University "enrolled only 10 people" when it opened "this fall" (which probably means 1998, when Noble wrote the original article) but not that the current enrollment, as of February 2002, is 2,500. For the most part, the evidence that Noble presents is highly selective and anecdotal, and there are annoyingly few footnotes to allow checking of sources or quotes.
The appearance of these essays with almost no revision from their initial serial publication on the Internet also helps to explain why Noble's arguments often sound contradictory. On page 36, for example, he flatly asserts that "a dismal new era of higher education has dawned"; but just twenty-four pages later, we learn that "the tide had turned" and "the bloom is off the rose." Later, he reverses course on the same page, first warning that "one university after another is either setting up its own for-profit online subsidiary or otherwise working with Street-wise collaborators to trade on its brand name in soliciting investors," but then acknowledging (quoting a reporter) that administrators have realized "that putting programs online doesn't necessarily bring riches." When Noble writes that "far sooner than most observers might have imagined, the juggernaut of online education appeared to stall," he must have himself in mind, two chapters earlier. Often, Noble reflects the great hysteria about online education that swept through the academy in the late 1990s. At other times (particularly when the prose has been lightly revised), he registers the sober second thoughts that have more recently emerged, especially following the dot-com stock market crash in early 2000.
In the end, one is provided remarkably few facts in Digital Diploma Mills about the state of distance education, commercialization or the actual impact of technology in higher education. How many students are studying online? Which courses and degrees are most likely to appear online? How many commercial companies are involved in online education? To what degree have faculty employed computer technology in their teaching? What has been the impact on student learning? Which universities have changed their intellectual property policies in response to digital developments? One searches in vain in Noble's book for answers, or even for a summary of the best evidence currently available.
Moreover, Noble undercuts his own case with hyperbole and by failing to provide evidence to support his charges. For example, most readers of his book will not realize that online distance education still represents a tiny proportion of college courses taken in the United States--probably less than 5 percent. Noble sweepingly maintains, "Study after study seemed to confirm that computer-based instruction reduces performance levels." But he doesn't cite which studies. He also writes, "Recent surveys of the instructional use of information technology in higher education clearly indicate that there have been no significant gains in pedagogical enhancement." Oddly, here Noble picks up the rhetoric of distance-education advocates who argue that there is "no significant difference" in learning outcomes between distance and in-person classes.
Many commentators have pointed out Noble's own resistance to computer technology. He refuses to use e-mail and has his students hand-write their papers. Surely, there is no reason to criticize Noble for this personal choice (though one feels sorry for his students). Noble himself responds defensively to such criticisms in the book's introduction: "A critic of technological development is no more 'anti-technology' than a movie critic is 'anti-movie.'" Yes, we do not expect movie critics to love all movies, but we do expect them to go to the movies. Many intelligent and thoughtful people don't own television sets, but none of them are likely to become the next TV critic for the New York Times. Thus, Noble's refusal to use new technology, even in limited ways, makes him a less than able guide to what is actually happening in technology and education.
Certainly, Noble's book offers little evidence of engagement with recent developments in the instructional technology field. One resulting distortion is that some readers will think that online distance education is the most important educational use of computer technology. Actually, while very few faculty teach online courses, most have integrated new technology into their regular courses--more than three-fifths make use of e-mail; more than two-fifths use web resources, according to a 2000 campus computing survey. And few of these faculty members can be characterized, as Noble does in his usual broad-brush style, as "techno-zealots who simply view computers as the panacea for everything, because they like to play with them."
Indeed, contrary to Noble's suggestion, some of the most thoughtful and balanced criticisms of the uses of technology in education have come from those most involved with its application in the classroom. Take, for example, Randy Bass, a professor of English at Georgetown University, who leads the Visible Knowledge Project (http://crossroads.georgetown.edu/vkp), a five-year effort to investigate closely whether technology improves student learning. Bass has vigorously argued that technological tools must be used as "engines of inquiry," not "engines of productivity." Or Andrew Feenberg, a San Diego State University distance-education pioneer as well as a philosopher and disciple of Herbert Marcuse, who has insisted that educational technology "be shaped by educational dialogue rather than the production-oriented logic of automation," and that such "a dialogic approach to online education...could be a factor making for fundamental social change."
One would have no way of knowing from Noble's book that the conventional wisdom of even distance-education enthusiasts is now that cost savings are unlikely, or that most educational technology advocates, many of them faculty members, see their goal as enhancing student learning and teacher-student dialogue. Noble, in fact, never acknowledges that the push to use computer technology in the classroom now emanates at least as much from faculty members interested in using these tools to improve their teaching as it does from profit-seeking administrators and private investors.
Noble does worry a great deal about the impact of commercialization and commodification on our universities--a much more serious threat than that posed by instructional technology. But here, too, the book provides an incomplete picture. Much of Noble's book is devoted to savaging large public and private universities--especially UCLA, which is the subject of three chapters--for jumping on the high-technology and distance-education bandwagons. Yet at least as important a story is the emergence of freestanding, for-profit educational institutions, which see online courses as a key part of their expansion strategy. For example, while most people think of Stanley Kaplan as a test-preparation operation, it is actually a subsidiary of the billion-dollar Washington Post media conglomerate; it owns a chain of forty-one undergraduate colleges and enrolls more than 11,000 students in a variety of online programs, ranging from paralegal training to full legal degrees at its Concord Law School, which advertises itself as "the nation's only entirely online law school." This for-profit sector is growing rapidly and becoming increasingly concentrated in a smaller number of corporate hands. The fast-growing University of Phoenix is now the largest private university in the United States, with more than 100,000 students, almost one-third of them in online programs, which are growing more than twice as fast as its brick-and-mortar operation. Despite a generally declining stock market, the price of the tracking stock for the University of Phoenix's online operation has increased more than 80 percent in the past year.
As the Chronicle of Higher Education reported last year, "consolidation...is sweeping the growing for-profit sector of higher education," fueled by rising stock prices in these companies. This past winter, for example, Education Management Corporation, with 28,000 students, acquired Argosy Education Group and its 5,000 students. The threat posed by these for-profit operations is rooted in their ability to raise money for expansion through Wall Street ("Wall Street," jokes the University of Phoenix's John Sperling, "is our endowment") and by diminishing public support for second-tier state universities and community colleges (the institutions from which for-profits are most likely to draw new students). Yet, except for an offhand reference to Phoenix, Digital Diploma Mills says nothing about these publicly traded higher-education companies. And these for-profit schools are actually only a small part of the more important and much broader for-profit educational "sector," which is also largely ignored by Noble and includes hundreds of vendors of different products and services, and whose size is now in the hundreds of billions of dollars--what Morgan Stanley Dean Witter calls, without blushing, an "addressable market opportunity at the dawn of a new paradigm."
Noble does provide a strong cautionary tale--the most informative chapter in the book--about the involvement of UCLA's extension division with a commercial company called Onlinelearning.net. He shows how some UCLA administrators as early as 1993 greedily embraced a vision of riches to be made in the online marketing of the college's extension courses. UCLA upper management apparently bought the fanciful projections of their commercial partners that the online venture would generate $50 million per year within five years; in reality, annual revenues quickly fell below $1 million. But Noble conflates the UCLA online-extension debacle with a more benign effort by the UCLA College of Letters and Sciences, beginning in 1997, to require all instructors to post their course syllabuses on the web. He seems unwilling to draw distinctions between the venal and scandalous actions of top UCLA administrators and the sometimes ham-handed efforts of other administrators to get UCLA faculty to enhance their classes by developing course websites, a fairly common educational practice and a useful convenience for students. Three-fifths of UCLA students surveyed said that the websites had increased interactions with instructors, and social science faculty recently gave the website initiative a mostly positive evaluation.
Sounding an "early alarm" so that faculty members can undertake "defensive preparation and the envisioning of alternatives" is how Noble explains his purpose in writing Digital Diploma Mills. But will faculty be well armed if they are unaware of the actual landscape they are traversing? In the end, Noble leaves us only with a deep and abiding suspicion of both technology and capitalism. His analysis of technology and education does echo Marx's critique of capitalism, with its evocation of concepts like commodification, alienation, exchange and labor theories of value. But unlike Marx, who produced a critical analysis of the exploitative nature of early capitalist production without outright rejection of the technology that made industrialization possible, Noble cannot manage the same feat.
In the current political climate, Noble's undifferentiated suspicion of technology hinders us more than it helps us. Are we prepared to follow him in his suspicion of any use of technology in higher education? Are faculty members willing to abjure e-mail in communicating with their students and colleagues? Are instructors at small colleges with limited library collections prepared to tell their students not to use the 7 million online items in the Library of Congress's American Memory collection? Are they ready to say to students with physical disabilities that limit their ability to attend on-campus classes or conduct library research that they can't participate in higher education? Are faculty at schools with working adults who struggle to commute to campus prepared to insist that all course materials be handed directly to students rather than making some of them available online?
Similarly, what lines are we prepared to draw with respect to commercialization of higher education within the capitalist society in which we live? Are faculty willing to abandon publishing their textbooks with large media conglomerates and forgo having their books sold through nationwide bookstore chains? Are they prepared to say to working-class students who view higher education as the route to upward mobility that they cannot take courses that help them in the job market?
Noble's answer to most of these questions would undoubtedly be yes, insisting, as he does, that anything less than "genuine interpersonal interaction," face to face, undermines the sanctity of the essential teacher-student relationship. In a March 2000 Chronicle of Higher Education online dialogue about his critique of technology in education, Noble complained that no one had offered "compelling evidence of a pedagogical advantage" in online instruction. (He pristinely refused to join online, and had a Chronicle reporter type in his answers relayed over the phone.) A student at UCLA, who had unexpectedly taken an online course, noted in her contribution to the Q&A that because she tended to be "shy and reserved," e-mail and online discussion groups allowed her to speak more freely to her instructor, and that she thought she retained more information in the online course than in her traditional face-to-face classes at UCLA. Noble rejected the student's conclusion that the online course had helped her find her voice, arguing that writing was "in reality not a solution, but an avoidance of the difficulty." "Speaking eloquently, persuasively, passionately," he concluded, "is essential to citizenship in a democracy." Putting aside the insensitivity of Noble's reply, his position, as Andrew Feenberg points out in Transforming Technology: A Critical Theory Revisited, is reminiscent of Plato's fear that writing (the cutting-edge instructional technology in the ancient world) would replace spoken discourse in classical Greece, thus destroying the student-teacher relationship. (Ironically, as Feenberg also notes, "Plato used a written text as the vehicle for his critique of writing, setting a precedent" for current-day critics of educational technology like Noble who have circulated their works on the Internet.)
The conservative stance of opposing all change--no technology, no new modes of instruction--is appealing because it keeps us from any possible complicity with changes that undercut existing faculty rights and privileges. But opposition to all technology means that we are unable to support "open source" technological innovations (including putting course materials online free) that constitute a promising area of resistance to global marketization. And it makes it impossible to work for protections that might be needed in a new environment. Finally, it leaves unchanged the growing inequality between full-time and part-time faculty that has redefined labor relations in the contemporary university--the real scandal of the higher-education workplace. Without challenging the dramatic differences in wages and workloads of full professors and adjunct instructors, faculty rejection of educational technology begins to remind us of the narrow privileges that craft workers fought to maintain in the early decades of industrial capitalism at the expense of the unskilled workers flooding into their workplaces.
We prefer to work from a more pragmatic and realistic stance that asks concretely about the benefits and costs of both new technology and new educational arrangements to students, faculty (full- and part-time) and the larger society. Among other things, that means that academic freedom and intellectual property must be protected in the online environment. And the faculty being asked to experiment with new technology need to be provided with adequate support and rewards for their (ad)ventures. As the astute technology commentator Phil Agre wrote when he first circulated Noble's work on the Internet, "the point is neither to embrace or reject technology but to really think through and analyze...the opportunities that technology presents for more fully embodying the values of a democratic society in the institutions of higher education."
As the chairman of Artemis Records, the company that released Cornel West's CD, Sketches of My Culture, I considered criticizing Cornel for his association with Lawrence Summers, president of Harvard. Without ever listening to it, Summers attacked West merely for having released a CD, dismissing the entire universe of recorded music as being "unworthy of a Harvard professor." But like most record executives, I'm more tolerant of unorthodox associations than Summers, so I'll continue to judge West by his work and the inspiration it provides.
Among the flurry of press reports sparked by the controversy--most of which alluded to the alleged "rap CD"--quite a few couldn't get the facts straight. The New Republic claimed that West "has spent more time recording a rap CD and stumping for Al Sharpton than doing academic work." In fact, West has canceled only one class in twenty-six years of teaching, and that was several years ago, to deliver a lecture in Ethiopia. West recorded the CD during a leave--a long-established privilege in academia. (Summers himself took a leave from a professorship at Harvard to work for the World Bank.)
A Summers aide has said that the confrontation with West was a "terrible misunderstanding," but it's possible that Summers knew exactly what he was doing, using West the way Bill Clinton used Sister Souljah: to placate conservative elements of his constituency. Not only did Summers harshly criticize West's published work, he acknowledged that he had not read any of it or listened to the CD. Moreover, it's obvious that what disturbs Summers is not the notion of a Harvard professor engaging in political activity but West's particular beliefs: He criticized West's involvement with Bill Bradley, Ralph Nader and Al Sharpton, but Summers himself supported Al Gore (as did West's friend and supporter Henry Louis Gates Jr., head of the Afro-American studies department). Summers has been silent as his supporters have misrepresented West's record and called him names. Two examples: The National Review's Rod Dreher referred to West as a "clownish minstrel," and the New York Daily News's Zev Chafets called him "a self-promoting lightweight with a militant head of hair."
West's decision to record a CD is in keeping with a commitment to spread his ideals and ideas as far and wide as possible. His book Race Matters has sold more than 350,000 copies and is one of the most influential books on race of the past couple of decades. His other works are used as texts in college classes around the world. There is no other public figure who is welcome in academia, in the media, in both conventional and activist politics and in the religious world.
By the way, Sketches of My Culture is not a "rap" CD. West, like most contemporary music critics, acknowledges that hip-hop is a vital cultural language. But Sketches itself is a concept album that is predominantly spoken word surrounded by r&b music, a montage that includes limited and focused uses of hip-hop language. Like any work of art, it's open to legitimate criticism, but it is clearly a serious attempt to use a modern art form to grapple with the themes that have animated West's career: black history, spirituality and political morality. There is not a word of profanity on it.
The indefatigable West has reached out to poor communities, moderating the crucial final panel at a recent "Rap Summit" and appearing on urban radio shows that had never been graced by the presence of an academic. I have seen the faces of young people inspired by West's linking of their own aspirations to the civil rights struggle and to the great philosophical and religious traditions. He urges them to live up to those examples. It has said something to the broader American community about Harvard that Cornel West is a professor there, and it will say something about Harvard if he is not.
Critics of the war on terror—or even those who slightly question the Bush administration—may now find themselves on a list of members of a fifth column.