I didn’t have to read Bob Fitrakis’s wonderfully titled piece, “Joshua Holland, The Nation’s Truth Nazi, Needs to Calm Down.” I’ve become familiar with the genre since writing that there’s no reason to suspect that election fraud has been a factor in the Democratic primary results. Similar pieces have been published by Counterpunch, The Huffington Post (by Tim Robbins!) and a slew of other fringier outlets. They all stick to the same formula: After questioning my intellectual capacity, they claim or imply that I dismiss the real problems with our piss-poor election systems, and follow that with a heaping serving of nonsense about how exit polls supposedly reveal widespread fraud committed by the Clinton campaign.
I suppose it’s better to be a Truth Nazi than the regular kind. And since we’ll likely see a new round of this stuff following today’s primaries, let me respond to all of these pieces by acknowledging that we have an election infrastructure that would embarrass most banana republics. But when it comes to exit polling, essentially everything these articles claim is dead wrong.
The laziest iteration of these claims is that the exit polls have diverged significantly from the final vote tallies in many of the states Clinton won, and the same pattern isn’t evident in Republican contests. That’s simply untrue. The exit polls have been off in a couple of states, but for the most part they’ve fallen within the margin of error in both Republican and Democratic contests.
But the conspiracy-mongers aren’t really talking about exit polls. Their claims are based on obsessively parsing preliminary exit poll data that some media outlets publish when the polls close—the same data that political reporters always tell people to take with a big grain of salt because they’re notoriously inaccurate. (Most of their claims are based on the work of Richard Charnin, who runs a blog devoted to “JFK conspiracy and systemic election fraud analysis.” Charnin’s also a mathematician, as Tim Robbins notes, but, as we’ll see, his calculations aren’t the problem.)
The writers hyping this stuff claim that those preliminary data are “unadjusted,” and therefore offer a true barometer of voters’ responses as they leave their polling places. They say that the preliminary data are then adjusted to conform to the official results. In the hour or so between when the polls close and the final exit polls are released, they say, votes have consistently shifted away from Sanders, and this indicates that pollsters are covering up election fraud. (That last bit is often left implied, lest people consider how wide-ranging this plot must be.) And, central to the whole story, they say that looking at the way these data shift is a vital means of identifying potential fraud.
Every single part of that is 100 percent wrong.
Edison Media Research has conducted all of the exit polls for the major US media organizations since 2003. Joe Lenski, Edison’s executive vice president, walked me through how exit polls are actually conducted over two phone interviews, and what he described reveals just how specious these claims really are.
Here’s how exit polling works: In most states, Edison conducts phone interviews before Election Day to capture absentee and early voting. Then, on Election Day, they send staff to between 15 and 50 polling places per state, and they ask between 500 and 3,000 voters to fill out questionnaires indicating which candidate they voted for and what issues are important to them. In order to account for those voters who refuse to fill out a questionnaire, exit pollsters have to adjust their survey data. Lenski says that about 50–60 percent refuse to participate. When someone says no, the pollster notes the person’s rough age, race, and gender. They then weight their data to match the population that voted at that location.
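The nonresponse adjustment Lenski describes can be sketched in a few lines. This is purely illustrative, not Edison's actual procedure or code: each respondent is weighted up to stand in for all the voters in their demographic cell, using the tally of everyone approached (including refusers). All the numbers below are hypothetical.

```python
from collections import Counter

# Hypothetical tally at one polling place: the demographic cell of every
# voter approached, whether or not they agreed to fill out a questionnaire.
approached = ["young", "young", "old", "old", "old", "old"]

# The subset who responded, with their (hypothetical) ballots.
cells = {"young1": "young", "young2": "young", "old1": "old"}
votes = {"young1": "A", "young2": "B", "old1": "A"}

pop = Counter(approached)          # true mix of voters at this precinct
sample = Counter(cells.values())   # mix among respondents only

# Weight each respondent so their cell's share matches the approached tally:
# here older voters mostly refused, so the lone "old" respondent counts 4x.
weights = {rid: pop[c] / sample[c] for rid, c in cells.items()}

tally = Counter()
for rid, candidate in votes.items():
    tally[candidate] += weights[rid]

total = sum(tally.values())
shares = {cand: round(100 * w / total, 1) for cand, w in tally.items()}
print(shares)  # weighted vote shares, in percent
```

The point of the sketch is just that weighting by refusers' rough age, race, and gender is routine survey correction, not manipulation.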
Some media outlets post preliminary data when the polls close—that’s the supposedly raw data that, according to the conspiracy-minded, reveal the fraud. But those data have already been merged with the results of those telephone interviews, and they have already been adjusted throughout the day (the interviewers send in their survey results in three waves). Unadjusted data are never released. (If you Google “exit polls adjusted New York,” you’ll get back dozens of posts claiming that the “unadjusted exit polls” varied significantly from the final results. All of those posts are dead wrong, as none of their authors have any idea what the unadjusted data looked like.)
For writers like Fitrakis, these adjustments are inherently sinister. But Lenski notes that all surveys, election-related or not, are adjusted to factual data. “Every telephone survey or online survey is weighted according to Census figures,” he explains, “or if it’s a pre-election survey they’ll weight the data to match the demographics on a voter registration list.” (See here for a detailed explanation of how and why pollsters have to adjust, or “weight,” their samples to get accurate results.)
When the polls close, pollsters don’t adjust the data to “match the official results.” They use the official results from the relatively small number of polling places where they conducted interviews to refine their sample. For example, if their model assumed that 30 percent of voters at a polling place would be black, and that number actually turns out to be 20 percent, or 40 percent, then they’ll weight the data accordingly. During this period, they’re also entering any surveys that were sent in late (again, this is all based on incomplete data).
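That 30-percent-versus-20-percent example can be made concrete with a toy calculation (hypothetical numbers, not real exit-poll data): the candidate's support *within* each group never changes, but correcting the assumed group mix to the observed one shifts the topline estimate.

```python
def topline(share_by_group, mix):
    """Weighted average of within-group support over the group mix."""
    return sum(share_by_group[g] * mix[g] for g in mix)

support = {"black": 0.70, "other": 0.40}   # within-group support from surveys
assumed = {"black": 0.30, "other": 0.70}   # model's pre-election assumption
actual  = {"black": 0.20, "other": 0.80}   # observed turnout at sampled precincts

print(round(topline(support, assumed), 2))  # preliminary estimate: 0.49
print(round(topline(support, actual), 2))   # corrected estimate: 0.46
```

A three-point swing with no vote changing hands at all, just from fixing one wrong turnout assumption, which is exactly the kind of shift the conspiracy theorists read as fraud.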
These adjustments, and the inclusion of those late surveys, can account for significant shifts between the preliminary numbers posted when the polls close and the release of the final exit polls an hour or so later. But the important point here is that the final results are more accurate, not less so.
The big difference between exit polls and pre-election polls is that with the former, people can see how this statistical sausage is made in real time. That’s how these conspiracy theories gain steam.
Here’s another fatal flaw in the truthers’ logic: Edison developed its statistical models months before the vote, and long before there were any pre-election polls suggesting which candidate was likely to win a given race. (The models actually go back to the 1960s, but they were tailored for the 2016 race in the fall of 2015.) Conspiracy theorists would have you believe that a corrupt wizard is sitting behind the curtain, making decisions about how to weight the data as the results come in, in order to cover up election theft. The reality is that Edison has a bunch of statistical models that have been sitting on a computer since last fall, and they plug information into them as it comes in.
And here’s the last nail in the conspiracy-mongers’ coffin: While exit polls are used to detect potential fraud in some countries, ours aren’t designed, and aren’t accurate enough, to accomplish that purpose. Lenski, who has conducted exit polls in fragile democracies like Ukraine and Venezuela, explained that there are three crucial differences between their exit polls and our own. Polls designed to detect fraud rely on interviews with many more people at many more polling places, and they use very short questionnaires, often with just one or two questions, whereas ours usually have twenty or more. Shorter questionnaires lead to higher response rates. Higher response rates paired with larger samples result in much smaller margins of error. They’re far more precise. But it costs a lot more to conduct that kind of survey, and the media companies that sponsor our exit polls are only interested in providing fodder for pundits and TV talking heads. All they want to know is which groups came out to vote and why, so that’s what they pay for.
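The sample-size point can be seen in the textbook formula for a proportion's 95 percent margin of error, 1.96 × √(p(1−p)/n). The sample sizes below are hypothetical, and real exit polls use cluster samples, so their effective margins are larger than this simple formula suggests, which only strengthens the point.

```python
import math

def moe_95(p, n):
    """Approximate 95% margin of error for a proportion p with n respondents."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# A media-style exit poll sample vs. a fraud-detection-scale sample,
# both hypothetical, at p = 0.5 (the worst case for precision).
for n in (1500, 15000):
    print(n, round(100 * moe_95(0.5, n), 1), "points")
```

Ten times the respondents cuts the margin by a factor of roughly √10, from about 2.5 points to about 0.8, which is why fraud-detection polls need the big samples our media sponsors won't pay for.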
The sad truth is that we do have creaky, antiquated election infrastructure, voters don’t have a lot of faith in the system, and there’s really no good way to identify potential fraud. But wishing that our exit polls were up to the task doesn’t make them so, and parsing preliminary data posted on CNN.com is just a fool’s errand.