In the summer of 1966 the Office of Education published the now famous “Coleman Report,” which assessed the nation’s progress in achieving the school integration mandated by the 1954 Brown v. Board of Education decision. The report was based on a survey conducted by the Educational Testing Service under the direction of sociologist James Coleman, and it included a brief test of cognitive skills. As reported in Nicholas Lemann’s superb book on the history of ETS, The Big Test, these researchers hoped to find that the black-white gap in scores on this test would be smaller in the better-funded Northeastern schools than in the less-well-funded Southern schools. Such a finding would justify the use of federal funds to bolster underfunded public schools, an unprecedented policy at the time.
What they found, however, was a black-white score gap virtually as large in better-funded schools as in poorly funded schools. The immediate reaction was dramatic. Coleman dropped the idea of federal support for underfunded public schools, apparently presuming that it wouldn’t help much. The idea was later reinvigorated and passed into law as “Title I” funding for low-achieving, low-income schools. But this policy, too, seemed to lack a certain confidence: it targeted funds primarily at basic-skills remediation, not at a broad, high-quality education for low-income students.
These events turned out to be a prophetic episode in a story that continues to this day. For perhaps the first time, the ETS survey revealed that the racial gap in test scores would be difficult to eliminate. But just as important, the episode revealed a certain paradigm for using test scores in educational decision-making. With roots all the way back to the beginning of standardized testing in the early twentieth century, the paradigm is familiar: Based on tests taken early in life, lower-scoring people and groups get less educational attention, or more of a basic-skills education aimed at bringing them to minimal levels of competence, whereas higher-scoring people and groups get a richer education supported by more resources, such as better-trained teachers, more academically challenging curriculums, and better opportunities. The rationale for this “ability paradigm,” as I will call it, has always been a kind of meritocratic efficiency: maximizing the return on society’s investment by directing the most resources to those who, as indicated by test scores, have the ability to benefit from them.
But in the spirit of reflection occasioned by this anniversary of Brown, one might ask a difficult, two-part question: Has this paradigm all along been a major cause of the racial gap in test scores, and is it now, through this effect, a major remaining barrier to the full integration envisioned in Brown?
The paradigm has always involved a daunting set of assumptions: that there is a core intellectual ability; that a certain level of that ability is indispensable for benefiting from a high-quality education; that one’s level of this ability is fairly stable across the life span; that this ability can be accurately and reliably measured in people from virtually all backgrounds by a cognitive test given in a single sitting at almost any point in a person’s development; and that, therefore, scores from these tests can be used to triage students efficiently into ability-appropriate educational tracks early in life. When you look at it, this is a lot to believe in. But like most assumptions, these beliefs are more implicit than explicit. We endorse them largely by using the paradigm they support.