Standardized Tests Are a New Glass Ceiling

Women do better in class and worse on tests—and there are consequences.

Hardly a week goes by without a panel, conference, or symposium on luring women into STEM (science, technology, engineering, and mathematics) careers. Even the president has joined in: “We’ve got half the population that is way underrepresented in those fields.” He has his numbers right. Women currently receive less than a fifth of all bachelor’s degrees in physics, computer science, and engineering. In the last national count, only 8,851 women had majored in mathematics and statistics.

We’ve heard most of the reasons, not least hostility in laboratories. But a more central cause became apparent as I began researching the teaching and testing of mathematics. Standardized testing in math, where women do significantly worse than men, is setting women back before they even begin college. Since mathematics is the first hurdle for STEM fields, women are unlikely to sign on if they’ve already been told that they don’t measure up. We know that the problem is the test. It’s not the students, because girls and women are getting better grades than boys and men in high-school and college mathematics courses. Without changing our methods for measuring ability, we stand little chance of changing the gender imbalance among our scientists and engineers.

The importance we assign to standardized tests is eclipsing that of assessments by sentient teachers. Each year, more weight is given to scores disgorged by the ACT and the SAT, backstopped by the GRE, MCAT, and LSAT, not to mention standardized Common Core tests, which are given over to firms like Pearson and McGraw-Hill. Computer-awarded scores are touted as objective, whereas grades bestowed by teachers are seen as subjective, if not tainted by biases. (An ACT study intimated that the principal victims of prejudice were boys.)

On last year’s SAT, boys averaged 527 in the mathematics section against 496 for girls—a far wider gulf than elsewhere in the test. The ACT’s gap is smaller, largely because its test is closer to what schools actually teach, but boys are still visibly ahead. A more reliable gauge, in fact, is students’ performance in high-school courses before they take the tests and in college courses afterward. I did some calculations to see what would happen if the SAT’s mathematics scores reflected classroom grades. If that were the case, girls would not only erase their current 31-point deficit, but would move 32 points ahead of their male classmates. With the ACT, they would gain 28 points and also pass the boys. (I’ve converted ACT scores here to the SAT range.)

Since we know that girls and women are just as intelligent and adaptable as boys and men, why aren’t they faring equally well with an instrument that has been in place for over half a century? I turned to Marcia Linn at the University of California, Berkeley, who has studied grades and scores for over 20 years, especially gender differences in mathematics. “Females turn out to be better course takers,” she has concluded; “males turn out to be better test takers.” She notes that boys are more apt to take physics and computer science, which sharpen quantitative and spatial skills. And more college-aspiring girls come from lower-income homes with fewer resources for tutoring. But what ultimately separates the scores, Linn says, is the “tendency of girls to be more conscientious than boys.”

Diligence pays off in complex class assignments, which results in higher grades. But pausing to ponder can spell death in multiple-choice testing, since speed is crucial for a high score. The ACT’s 60 mathematics problems must be assessed and answered in 60 minutes, while a more generous SAT, set to start this spring, allots 83 seconds per problem. Given the ticking clock, the tests openly advise swift skimming and blind guessing. Hence this advice from Axiom Learning, a coaching company: “It’s Not What You Know, It’s How Fast You Can Show It.”

I next conferred with Jonathan Chiu, who oversees Princeton Review’s tutorial services. He began by saying that he warns girls not to double-check their answers, because that wastes crucial seconds. Girls tend to “overanalyze” the options, he added, while boys cotton to the idea that there is “only one right answer.” The ACT and the SAT concede that it’s not possible to truly solve all of their problems in the allotted time. So along with speed, there’s what some coaches call “stabbing,” which can yield precious points. Suppose you know the bell is about to ring, and you have 10 items still to go. Chiu recommends that you not even read them, but simply stab a bubble for each one. He says that girls are more apt to feel it’s not honest to fill in answers if you haven’t done the questions. A venerable College Board study found they were 12 times more likely to leave the bubbles blank because they weren’t sure. Chiu notes that too many girls enter the tests feeling their knowledge is being weighed, while boys perceive them as contests to be gamed. The keys to a successful score are an impulsive pace, brazen confidence, and a cynical view of the entire enterprise.

* * *

Let us consider one outcome of these tests. Each year, the National Merit Scholarship Corporation induces some 1.6 million high-school juniors to vie for its 7,400 awards. It purports to be a national talent search, funded by companies like McDonald’s, Boeing, and Lorillard Tobacco, eager to show a social commitment. While NMS releases reams of data, it steadfastly refuses to provide gender breakdowns, either for its initial pool of entrants or the final winners. When I asked for a few figures, an NMS spokesperson replied that the company didn’t keep them because gender “is not used in the selection process.”

So I did some digging of my own. NMS awards are based almost entirely on the PSAT, an abridged version of the SAT. In recent years, girls have comprised 53 percent of those taking this test. (NMS never mentions this figure.) The PSAT does release its ranges of scores, where its three parts—reading, writing, and mathematics—get equal weight. The genders are just a point or so apart in reading and writing. But the difference in mathematics is striking, with twice as many boys landing in the top tier. This edge boosts them overall, and it seems valid to surmise that discernibly more boys will be getting NMS scholarships. (Indeed, if we had reading and writing results that mirrored classroom accomplishment, girls’ scores would be substantially higher than the boys’.)

NMS also declines to print a list of its ultimate winners. However, it does release the names of each state’s “semifinalists,” the penultimate draw. I chose Ohio as a sample state and examined its 626 names to identify them by gender. (Some of the names were androgynous or unfamiliar to me, so I split them evenly.) I found that girls comprised 47 percent of Ohio’s NMS semifinalists. Here, too, it was the standardized mathematics scores that brought girls, who started as 53 percent of the entrants, down to 47 percent of the semifinalists. The PSAT’s gender bias thus results in more boys than girls receiving national recognition, not to mention money for college.
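A rough, illustrative check shows what that drop implies. The sketch below is not from the article’s own calculations; it simply assumes that the national 53 percent share of test takers also describes Ohio’s entrant pool, and that advancement rests on the PSAT alone.

```python
# Back-of-envelope check (illustrative assumptions, not the article's method):
# if girls are 53% of PSAT takers but 47% of semifinalists, how do the two
# groups' chances of advancing compare?

girls_share_entrants = 0.53       # girls as a share of PSAT takers (national figure)
girls_share_semifinalists = 0.47  # girls among the 626 Ohio semifinalists

boys_share_entrants = 1 - girls_share_entrants
boys_share_semifinalists = 1 - girls_share_semifinalists

# Ratio of the two selection rates; the total entrant and semifinalist
# counts cancel out, so only the shares matter.
relative_rate = (girls_share_semifinalists / girls_share_entrants) / (
    boys_share_semifinalists / boys_share_entrants
)

print(f"A girl's chance of advancing is about {relative_rate:.2f} times a boy's,"
      f" i.e. roughly {(1 - relative_rate):.0%} lower.")
```

Under those assumptions, the slide from 53 to 47 percent means girls advance at roughly four-fifths the rate of boys.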

Consider another outcome of biased testing: More men than women are admitted to top-tier schools, even though 57 percent of the bachelor’s degrees awarded nationwide go to women. At Stanford and Yale, for example, less than half of the undergraduates are women. Here’s the reason: These elite colleges demand that most of the students they admit have SAT scores of at least 700 (or above 33 on the ACT) on both the reading segment and, more decisively, the mathematics segment. What Yale, Stanford, and others know is that women make up only 38 percent of the SAT’s 700-plus mathematics pool and 34 percent of the ACT’s 33-plus circle. As a result, more men are routinely deemed to have the dossier these colleges seek. Might these colleges be worried about their public image if women began to outnumber men on their campuses?

So what’s to be done? Machine-graded testing is so entrenched that about all we get is tinkering. (SAT items now have four choices instead of five.) In the past, questions involving the torque of racing cars were deemed gender-biased. It’s hard to find anything slanted quite so obviously today. If more mathematics problems can be attuned to today’s girls and women, efforts should be made to include them. But we shouldn’t delude ourselves that female-friendly wording will turn the tide.

The generally accepted antidote follows Henry Higgins’s plea (here faintly amended) in My Fair Lady: “Why Can’t Women Be More Like Men?” This is a patent premise in coaching courses. Kaplan has even produced a special “Study Guide for Girls.” Essentially, girls are told to forget what got them A’s in their mathematics classes and urged instead to deliberate less on questions, answer even when they don’t know, and tackle the test as a game to be outwitted.

Is that what we want? If anything, I would have supposed we want to encourage young people—nascent adults—to be thoughtful. And that entails taking your time, not taking shortcuts. But the real charge against our testing imperium is how blatantly it slights the talents of half our society, just when girls and women are revealing abilities that match or surpass those of boys and men. That they are denied their share of seats at selective schools and colleges, and of corporate-sponsored scholarships, should be broadly known and reproached. To make 83 seconds per advanced-algebra problem the key to attending Yale is to sustain yet another ceiling for women.
