Misinformation Is Destroying Our Country. Can Anything Rein It In?

Trump is gone, but the right-wing media is alive and well—and will further undermine our democracy if we let it. 

On the night after the November 3 election, hundreds of supporters of Donald Trump filled the parking lot of an election center in Arizona’s Maricopa County, where officials were still counting ballots. Although most networks had yet to call the state for Joe Biden, Fox News had declared him the projected winner shortly before midnight on election night. The Trump voters in Phoenix were furious. “Count the votes,” they chanted. “Fox News sucks!”

To Trump and his supporters, the network’s call in Arizona was a rare betrayal: In the previous four years Fox had become so enmeshed with the Trump administration that it functioned effectively as a mouthpiece for the White House. But Trump loyalists could easily confirm their belief that the election had been stolen from the president elsewhere, in more rabidly partisan outlets like Newsmax TV and One America News Network, which declared Fox a “Democrat Party hack” in the wake of its Arizona call. A whole web of right-wing influencers and outlets amplified conspiracies about ballots stuffed in suitcases and counterfeit mail-in votes.

At the Maricopa County rally, some protesters carried long guns. Others waved Sharpie pens, in reference to a rumor that had spread wildly online in the preceding hours and quickly became known as “Sharpiegate”: the unfounded claim that the votes of Trump supporters who’d been given Sharpies to fill out their ballots would be disqualified. The conspiracy theory appeared to originate with a conservative radio host in Chicago, who tweeted early on Election Day about felt-tip pens bleeding through ballots. Within a few hours, right-wing communities online were discussing claims that scanners couldn’t read Sharpie-marked ballots.

According to the Election Integrity Project, a partnership among several research institutions tracking misinformation throughout the election, the Sharpie conspiracy quickly became a popular explanation for why Biden was pulling ahead in swing states like Arizona. “Poll workers in Maricopa County AZ were handing out sharpie markers knowing full damn well that the machines register ONLY ink ballots. FRAUD IN ARIZONA,” read one tweet. A video featuring two women claiming that poll workers in Maricopa tried to force voters to use Sharpies went viral on Facebook, then spread quickly on YouTube, Twitter, Rumble, TikTok, Parler, and Reddit. Conservative influencers and media shared the conspiracy theory, as did Donald Trump’s son Eric and Arizona Representative Paul Gosar, who tweeted on November 4 that he’d asked the state attorney general to look into the claims.

As with other viral claims about election fraud in 2020, there was no evidence for Sharpiegate. Federal and state election officials declared the election the most secure in American history. Yet more than two-thirds of Republicans believed that Biden’s win was illegitimate, according to a January poll.

Natali Fierros Bock says she could feel this mass delusion calcifying in the wake of the election in Pinal County, a rural area between Phoenix and Tucson where she serves as co–executive director of the group Rural Arizona Engagement. “It feels like an existential crisis,” she says. Many of the Sharpiegate claims online referred to Pinal County, and Gosar, whose district includes a portion of the area, was reportedly responsible for helping organize the January 6 “Stop the Steal” rally in Washington that preceded the Capitol riot, in which five people died. Mark Finchem, a Republican who represents part of Pinal County in the statehouse, was also in Washington on January 6.

The Capitol insurrection threw into relief the real-world consequences of America’s increasingly siloed media ecosystem, which is characterized on the right by an expanding web of outlets and platforms willing to entertain an alternative version of reality. Social media companies, confronted with their role in spreading misinformation, scrambled to implement reforms. But right-wing misinformation is not just a technological problem, and it is far from being fixed. Any hope that the events of January 6 might provoke a reckoning within conservative media and the Republican Party has by now evaporated. The GOP remains eager to weaponize misinformation, not only to win elections but also to advance its policy agenda.

A prime example is the aggressive effort under way in a number of states to restrict access to the ballot. In Arizona, Republicans have introduced nearly two dozen bills that would make it more difficult to vote, with the big lie about election fraud as a pretext. “When you can sell somebody the idea that their elections were stolen, they’ve been violated, right? So then you need protection,” Bock says, explaining the conservative justification for the suite of new restrictions in her state. Voting rights is her organization’s “number one concern” at the moment. But Bock’s fears about political misinformation are more sweeping. Community organizing is difficult in the best of times. “But when you can’t agree on what is true and not true, when my reality doesn’t match the reality of the person I’m speaking to, it makes it more difficult to find common ground,” she says. “If we can’t agree on a common truth, if we can’t find a starting place, then how does it end?”

Around the time of the 2016 election, Kate Starbird, a professor at the University of Washington who studies misinformation during crises, noticed that more and more social media users were incorporating markers of political identity into their online personas—hashtags and memes and other signifiers of their ideological alignment. In the footage from the Capitol she saw the same symbols, outfits, and flags as those she’d been watching spread in far-right communities online. “To see those caricatures come alive in this violent riot or insurrection, whatever you want to call it, was horrifying, but it was all very recognizable for me,” Starbird says. “There was a time in which we were like, ‘Oh, those are bots, those aren’t real people,’ or ‘That’s someone play-acting,’ or ‘We’re putting on our online persona and that doesn’t really reflect who we are in an offline sense.’ January 6 pretty much disabused us of that notion.”

It was a particularly rude awakening for social media companies, which had long been reluctant to respond to the misinformation that flourished on their platforms, treating it as an issue of speech that could be divorced from real-world consequences. Facebook, Twitter, and other platforms had made some changes in anticipation of a contested election, announcing plans to label or remove content delegitimizing election results, for instance. Facebook blocked new campaign ads for the week leading up to the election; Twitter labeled hundreds of thousands of misleading tweets with fact-checking notes. Yet wild claims about election fraud spread virally anyway, ping-ponging from individual social media users to right-wing influencers and media.

During the 2016 campaign, most public concern about misinformation centered on shadowy foreign actors posing as news sources or US citizens. This turned out to be an oversimplification, though many on the center and left offered it as an explanation for Hillary Clinton’s defeat; blaming Russian state actors alone ignored factors like sexism, missteps by the Clinton campaign itself, and the home-grown feedback loop of right-wing media. In 2020, according to research done by Starbird and other contributors to the Election Integrity Project, those most influential in disseminating misinformation were largely verified, “blue check” social media users who were authentic, in the sense that they were who they said they were—Donald Trump, for example, and his adult sons.

Another key factor in the creation of the big lie was what Starbird calls “participatory disinformation.” Trump was tweeting about the election being stolen from him months beforehand, but once voting got under way, “what we see is that he kind of relies on the crowd, the audiences, to create the evidence to fit the frame,” Starbird explains. Individuals posted their personal experiences online, which were shared by more influential accounts and eventually featured in media stories that placed the anecdotes within the broader narrative of a stolen election. Some of the anecdotes that fueled Sharpiegate came from people who used a felt-tip pen to vote in person, then saw online that a ballot of theirs was listed as “canceled.” In fact, the “canceled” ballot was the mail-in ballot they had requested before deciding to vote in person instead. “It’s a really powerful kind of propaganda, because the people that were helping to create these narratives really did think they were experiencing fraud,” Starbird says. Action by content moderators usually came too late and was complicated by the fact that many claims of disenfranchisement by individual users were difficult to verify or disprove.

The Capitol riot led the tech giants to take more aggressive action against Trump and other peddlers of misinformation. Twitter and Facebook kicked Trump off their platforms and shut down tens of thousands of accounts and pages. Facebook clamped down on some of its groups, which the company’s own data scientists had previously warned were incubating misinformation and “enthusiastic calls for violence,” according to an internal presentation. Google and Apple booted Parler, a social media site used primarily by the far right, from their app stores, and Amazon stopped hosting Parler’s data on its cloud infrastructure system, forcing it temporarily offline.

But these measures were largely reactions to harm already done. “Moderation doesn’t reduce the demand for [misleading] content, and demand for that content has grown during some periods of time when the platforms weren’t moderating or weren’t addressing some of the more egregious ways their tools were abused,” says Renée DiResta, technical research manager at the Stanford Internet Observatory.

Deplatforming individuals or denying service to companies that tolerate violent rhetoric, as Amazon did with Parler, can have an impact, particularly in the short term and when done at scale. It reduces the reach of influential liars and can make it more difficult for “alt-tech” apps to operate. A notorious example of deplatforming involved Alex Jones, the conspiracy theorist behind the site Infowars. Jones was kicked off Apple, Facebook, YouTube, and Spotify in 2018 for his repeated endorsement of violence. He lost nearly 2.5 million subscribers on YouTube alone, and in the three weeks after his accounts were cut off, Infowars’ daily average visits dropped from close to 1.4 million to 715,000.

But Jones didn’t disappear—he migrated to Parler, Gab, and other alt-tech platforms, and he spoke at a rally in Washington the night before the Capitol attack. One outcome of unplugging Trump and other right-wing influencers has been a surge of interest in those alternative social media platforms, where more dangerous echo chambers can form and, in encrypted spaces, be more difficult to monitor. “Isn’t this just going to make the extreme communities worse? Yes,” says Ethan Zuckerman, founder of the Institute for Digital Public Infrastructure at the University of Massachusetts at Amherst. “But we’re already headed there, and at least the good news is that [extremists] aren’t going to be recruiting in these mainstream spaces.”

The bad news, in Zuckerman’s view, is that the far right is now leading the effort to create new forms of online community. “The Nazis right now have an incentive to build alternative distributed media, and the rest of us are behind, because we don’t have the incentive to do it,” Zuckerman explains. He argues that a digital infrastructure that is smaller, distributed, and not-for-profit is the path to a better Internet. “And my real deep fear is that we end up ceding the design of this way of building social networks to far-right extremists, because they are the ones who need these new spaces to discuss and organize.” In March, Trump spokesman Jason Miller said on Fox that the former president was likely to return to social media this spring “with his own platform.”

A more fundamental problem than Trump’s presence or absence on Twitter is the power that a single executive—Jack Dorsey, in the case of Twitter—has in making that decision. Social media companies have become so big that they have little fear of accountability in the form of competition. “To put it simply, companies that once were scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons,” concluded a recent report by the staff of the Democratic members of the House Judiciary Subcommittee on Antitrust.

For now, the reforms at Facebook and other companies remain largely superficial. The platforms are still based on algorithms that reward outrageous content and are still financed via the collection and sale of user data. Karen Hao of MIT Technology Review recently reported that a former Facebook AI researcher told her “his team conducted ‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization.” Hao’s investigation concluded that Facebook leadership’s relentless pursuit of growth “repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform.” The modest “break glass” measures Facebook took during the election in response to the swell of misinformation, which included tweaks to its ranking algorithm to emphasize news sources it considered “authoritative,” have already been reversed.

Tech companies could do more, as the election-time tweaks revealed. But they still “refuse to see misinformation as a core feature of their product,” says Joan Donovan, research director for the Shorenstein Center on Media, Politics and Public Policy at Harvard University. The problem of misinformation appears so vast “because that’s exactly what the technology allows.”

There are some signs of a growing appetite for regulation on Capitol Hill. Democrats have proposed reforms to Section 230 of the Communications Decency Act, which insulates tech companies from legal liability for content posted to their platforms; the proposals include requiring more transparency about content moderation and opening platforms to lawsuits in limited circumstances when content causes real-world harm. (GOP critiques of Section 230, on the other hand, rest on the false argument that it allows platforms to discriminate against conservatives.) Another legislative tactic would focus on the algorithms that platforms use to amplify content, rather than on the content itself. A bill introduced by two House Democrats would make companies liable if their algorithms promote content linked to acts of violence.

Democratic lawmakers are also eyeing changes to antitrust law, while several antitrust lawsuits have been filed against Facebook and Google. But litigation could take years. Even breaking up Big Tech would leave intact its predatory business model. To address this, Zuckerman and other experts have called for a tax on targeted digital advertising. Such a tax would discourage targeted advertising, and the revenue could be used to fund public-service media.

Social media plays a key role in amplifying conspiracy theories and political misinformation, but it didn’t create them. “When we think of disinformation as something that appeared [only in the Trump era], and that we used to have this agreed-upon narrative of what was true and then social platforms came into the picture and now that’s all fragmented… that makes a lot of assumptions about the idea that everyone used to agree on what was true and what was false,” says Alice E. Marwick, an assistant professor at the University of North Carolina who studies social media and society. Politicians have long leveraged misinformation, particularly racist tropes. But it’s been made particularly potent not just by social media, Marwick argues, but by the right-wing media industry that profits from lies.

“The American online public sphere is a shambles because it was grafted onto a television and radio public sphere that was already deeply broken,” argue Yochai Benkler, Robert Faris, and Hal Roberts of Harvard’s Berkman Klein Center for Internet and Society in their book Network Propaganda. The collapse of local news left a vacuum that for many Americans has been filled by partisan outlets that, on the right, are characterized by blatant disregard for journalistic standards of sourcing and verification. This insulated world of right-wing outlets, which stretches from those that bill themselves as objective sources, Fox News chief among them, to talk radio and extreme sites like Infowars and The Gateway Pundit, “represents a radicalization of roughly a third of the American media system,” the authors write.

The conservative movement spent decades building this apparatus to peddle lies and fear along with miracle cures and pyramid schemes, and was so successful that Fox and other far-right outlets ended up in a tight two-step with the White House. Fox chairman Rupert Murdoch maintained a close relationship with Trump, as did Sean Hannity and former Fox News copresident Bill Shine, who became White House communications director in 2018.

The backlash against Fox in the wake of the election hinted at a possible dethroning of the ruler of the right’s media machine. Its farther-right rival Newsmax TV beat Fox in a key ratings demographic for the first time ever in the month after the election, following supportive tweets from Trump, and during the week of November 9 it passed Breitbart as the most-visited conservative website. But Fox quickly regained its perch. The network backpedaled rapidly during its post-election ratings slump, firing an editor who’d defended the projection of a Biden win in Arizona and replacing news programming with opinion content. According to Media Matters, Fox News pushed the idea of a stolen election nearly 800 times in the two weeks after declaring Biden the winner. The network’s ad revenue increased 31 percent during the final quarter of 2020, while its parent company, Fox Corporation, saw a 17 percent jump in pretax profit.

The far-right media ecosystem has become so powerful in part because there’s been no downside to lying. Instead, the Trump administration demonstrated that there was a market opportunity in serving up misinformation that purports to back up what people want to believe. “In this day and age, people want something that tends to affirm their views and opinions,” Newsmax CEO Chris Ruddy told The New York Times’ Ben Smith in an interview published shortly after the election. Claims of a rigged election were “great for news,” he said in another interview. Trump’s departure from the White House won’t necessarily reduce the demand for this kind of content.

Since the Capitol riot, two voting-systems companies have launched an unusual effort to hold right-wing outlets and influencers accountable for some of the lies they’ve spread. Dominion Voting Systems, a major provider of voting technology, and another company called Smartmatic were the subjects of myriad outlandish claims related to election fraud, many of which were used in lawsuits filed by Trump’s campaign and were repeatedly broadcast on Fox, Newsmax TV, and OAN. Since January the companies have filed several defamation suits against Trump campaign lawyers Sidney Powell and Rudy Giuliani, MyPillow CEO Mike Lindell, and Fox News and three of its hosts. Dominion alleges that as a result of false accusations, its “founder and employees have been harassed and have received death threats, and Dominion has suffered unprecedented and irreparable harm.”

The threat of legal action forced a number of media companies to issue corrections for stories about supposed election meddling that mentioned Dominion. The conservative website American Thinker published a statement admitting its stories about Dominion were “completely false and have no basis in fact” and “rel[ied] on discredited sources who have peddled debunked theories.” OAN simply deleted all of the stories about Dominion from its website without comment.

These lawsuits will not dismantle the world of right-wing media, but they have prompted a more robust debate about how media and social media companies could be held liable for lies that turn lethal—and whether this type of legal action should be pursued, given the protections afforded by the First Amendment and the fact that the powerful often use libel law to bully journalists.

Ethan Zuckerman has been thinking about how to build a better Internet for years, a preoccupation not unrelated to the fact that, in the 1990s, he wrote the code that created pop-up ads. (“I’m sorry. Our intentions were good,” he wrote in 2014.) Still, he believes that framing misinformation as a problem of media and technology is myopic. “It’s very hard to conclude that this is purely an informational problem,” Zuckerman says. “It’s a power problem.”

The GOP is increasingly tolerant of, and even reliant on, weaponized misinformation. “We’re in a place where the Republican Party realizes that as much as 70 percent of their voters don’t believe that Biden was legitimately elected, and they are now deeply reluctant to contradict what their voters believe,” Zuckerman says. Republicans are reluctant, at least in part, because of a legitimate fear of primary challenges from the right, but also because they learned from Trump the power of using conspiracy theories to mobilize alienated voters by preying on their deep mistrust of public institutions.

It’s one thing for an ordinary citizen to retweet a false claim; it’s another for elected officials to legitimize conspiracy theories. But holding the GOP to account may prove to be even harder than reforming Big Tech. The radical grass roots have been empowered by small-dollar fundraising and gerrymandering, while more moderate Republicans are retiring or leaving the party. Writer Erick Trickey argued recently in The Washington Post that what undercut a similar wave of conservative crackpot paranoia, driven by the John Birch Society in the 1960s, was explicit denunciation by prominent conservatives like William F. Buckley Jr. and Ronald Reagan as well as by Republican congressional leaders. But today’s party leaders have been unwilling to excommunicate conspiracy-mongers. In the aftermath of the Capitol riot, elected officials who spread rumors that the violence was actually the work of antifascists—including Arizona’s Paul Gosar and Andy Biggs—gained notoriety, while those critical of Trump were publicly humiliated.

The embrace of conspiratorial narratives has been particularly pronounced in state GOP organizations. The Texas GOP recently incorporated the QAnon slogan “We are the storm” into official publicity media, and the Oregon GOP’s executive committee endorsed the theory that the riot had been a “false flag” operation. In March, members of the Oregon GOP voted to replace its Trump-supporting chairman with a candidate even farther out on the extremist fringe.

Weaponized misinformation could have a lasting impact not only on the shape of the GOP but also on public policy. Republicans are now using the big lie to try to restrict voting rights in Arizona, Georgia, and dozens of other states. As of February 19, according to the Brennan Center for Justice, lawmakers in 43 states had introduced more than 250 bills restricting access to voting, “over seven times the number of restrictive bills as compared to roughly this time last year.” In late March, Georgia Governor Brian Kemp signed a 95-page bill making it harder to vote in that state in a number of ways.

Many of the far-right extremists, politicians, and media influencers who spread misinformation about the presidential election are now pushing falsehoods about Covid-19 vaccines. The rumors, which have spread on social media apps like Telegram that are frequented by QAnon adherents and militia groups, range from standard anti-vax talking points to absurd claims that the vaccines are part of a secret plan hatched by Bill Gates to implant trackable microchips, that they cause infertility, or that they alter human DNA. Sidestepping the craziest conspiracies, prominent conservatives like Tucker Carlson and Wisconsin Senator Ron Johnson, who has become one of the GOP’s leading purveyors of misinformation, are casting doubt on vaccine safety under the pretense of “just asking questions.” Vaccine misinformation plays into the longstanding conservative effort to sow mistrust in government, and it appears to be having an effect: A third of Republicans now say they don’t want to get vaccinated.

These are the true costs of misinformation: deadly riots, policy changes that could disenfranchise legitimate voters, scores of preventable deaths. These translate into financial externalities: the additional expense of securing the Capitol, additional dollars devoted to the pandemic response. More abstract but no less real are the social costs: the parents lost down QAnon rabbit holes, the erosion of factual foundations that permit productive argument.

The problem with the far right’s universe of “alternative facts” is not that it’s hermetically sealed from the universe the rest of us live in. Rather, it’s that these universes cannot truly be separated. If we’ve learned anything in the past six months, it’s that epistemological distance doesn’t prevent collisions in the real world that can be lethal to individuals—and potentially ruinous for democratic systems.
