October 16, 2006
The last few weeks witnessed three crucial developments in the practice of admitting students at America’s colleges and universities. Two leading institutions of higher education announced, within days of each other, that they would end early admissions beginning next fall to make the process fairer to disadvantaged students and less stressful for all applicants. More discreetly, but of no less importance, the Secretary of Education’s Commission on the Future of Higher Education released a depressing report finding that the labyrinthine complexity of the federal financial aid system “has the unfortunate effect of discouraging some low-income students from even applying to college” in the first place. But equally newsworthy was the announcement that the initial results from the recently redesigned Scholastic Aptitude Test saw the nation’s average test score drop to its lowest point in 31 years.
The inauguration of the revamped exam should be accompanied by a robust debate over whether the SAT I, which purports to evaluate a test-taker’s “reasoning abilities” (rather than any specific knowledge), provides a useful assessment of a high school senior’s readiness to undertake college work. Defenders of the test’s utility argue that in a nation where curricula and grading standards vary widely between high schools, the SAT provides a useful common standard against which a college can compare applicants from different high schools.
An Algebra I course at a school in Texas might be more extensive and rigorous than the same class at a school in Massachusetts. A student who earns an “A” in the former therefore shows more achievement than one who earns the same grade in the latter, but an admissions officer cannot understand the difference of merit between these similar grades without a normalizing evaluative variable, like the SAT I. In this sense, as President Colin S. Diver of Reed College recently argued, the test helps mitigate the disparities engendered by systemic inequalities in American secondary education. Moreover, these advocates say, the exam’s evaluation of “reasoning skills” provides the fairest and most accurate prediction of how well a student will perform in the first year of college, since it tests the skills necessary for success in higher education no matter what area the student chooses to concentrate upon.
But does the SAT actually gauge the qualities fundamental to learning in a collegiate environment? Achieving academically at the undergraduate level involves something beyond the evaluative capabilities of a sentence completion task or algebra problem: the ability to make interpretive arguments about data. That’s what an aspiring chemist does when she writes a report explaining the significance of test results from a laboratory experiment. Although students of history like me don’t balance equations, we do have to know how to construct descriptive and analytical narratives of what went on in the past by studying documents, objects, and images. While my science classmate might spend more time in the lab than in the library, both of us essentially do the same thing: we gather a lot of complicated, heterogeneous information and make an argument about how it all fits together.
However, it isn’t clear that the SAT measures this skill of understanding data, which forms the bedrock of collegiate education. The essay question, where you might expect interpretive abilities to be tested, does not ask students to distill a lot of complicated information through a coherent argument; rather, it asks them to express a point of view based upon their own personal experience. The reading comprehension exercise (in which students answer questions about a short text) does not simulate the practice of sustained, close reading and documentary analysis that students perform in post-secondary study. All of the passage questions ask you to select the “correct” meaning of the passage. But in any good humanities class, meanings are treated as contestable arguments, not truths waiting to be magically extracted from a narrative; students are asked to make arguments defending their own interpretations of the material in papers and exams.
Finally, taking the SAT in today’s environment imposes a heavy financial burden on college-bound students and their families. The procedure is both financially and logistically costly: $41.50 to take the test; $9.50 for any score report beyond the fourth; and an entire Saturday morning squandered on filling out tiny bubbles with #2 pencils. Over 50 percent of test-takers take the test more than once, adding to the expense. If all this sounds pricey, don’t forget that Americans also spend $4 billion a year on SAT preparation services, more than enough to cover the cost of tuition, room, board, and books for every Princeton undergraduate. Moreover, these services have a sizeable positive effect on an individual’s score. A high score on the test can, in a sense, be purchased, and students from financially disadvantaged backgrounds may score lower simply because they cannot afford to prepare for a test that claims to require no special preparation. For this reason, the SAT doesn’t live up to its defenders’ lofty claims about the test’s fairness.
Why should colleges consider scores from a test that doesn’t reveal much about the skills a student needs to succeed in school, all the while unnecessarily snatching precious greenbacks from the wallets of financially stressed families and deepening the pockets of ETS profiteers? The whole racket is about as mind-boggling as those indecipherable analogies.