If democracy dies in darkness, as the Washington Post’s motto warns, what happens to it in a house of mirrors?
We live in an age when a not insignificant number of people believe very significant things that are simply not true.
And many of these people vote.
Disinformation – information that is knowingly false and shared with malicious intent – has always lurked in the shadows of political campaigns. But it was generally the stuff of dirty tricksters limited by technology – anonymous handbills and phone-bank whispering campaigns. We now see elaborate, hard-to-detect fakes overwhelming the marketplace of ideas, fueled by partisans willing to pass them along, and by the powerful algorithms that amplify them while earning billions for tech companies.
While a Harvard Kennedy School of Government study found that “misinformation sharing is strongly correlated with right-leaning partisanship,” that same study found evidence of people on the left sharing sketchy content. And bad information is bad information, no matter which side it helps or hurts. Indeed, one could argue that bad information is its own goal, destroying confidence in our system by creating informational chaos.
But what can we do about it? This is a question on the minds of people who care about the future of democracy and free speech around the world. Everyone from major universities to Washington think tanks to bewildered media institutions is trying to figure it out and cover it fairly. A survey of the latest thinking leads to as many new questions as answers. And while many solutions are being tested and several have built solid reputations for fairness and accuracy, none has yet shown resounding success.
“Almost every democracy is under stress, independent of technology,” Darrell M. West, a senior fellow at the Brookings Institution think tank, told the New York Times. “When you add disinformation on top of that, it just creates many opportunities for mischief.”
Normally a good place to start would be to figure out who is behind the problem and attack from there. But in the limitless ether of the internet, hunting for bad actors means drinking from a firehose: the sheer volume of content overwhelms any attempt to trace it back to its sources.
A joint warning issued before the 2024 election by the F.B.I. and the U.S. Cybersecurity and Infrastructure Security Agency singled out both Iranian and Russian bad actors, saying they were “knowingly disseminating false claims and narratives that seek to undermine the American people’s confidence in the security and legitimacy of the election process.”
The warning highlighted “cyber squatting” – creating websites with domain names that look familiar, like “washingtonpost.pm” and “fox-new.in.” Content is then created in the style of the actual news sites, in the hopes that users will consider it reliable and pass it along. The report listed hundreds of websites that have been flagged as suspect. But new websites can be created in seconds, from anywhere in the world.
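To make the mechanics concrete, here is a minimal sketch of how a lookalike domain might be flagged. It is an illustration only, not the method the agencies described: the brand list, the crude normalization, and the 0.8 similarity threshold are all assumptions invented for demonstration.

```python
# Illustrative sketch: flag "cybersquatted" lookalike domains by comparing
# the registrable name against a short, hand-maintained list of news brands.
# The brand list and the 0.8 threshold are assumptions for demonstration.
from difflib import SequenceMatcher

KNOWN_BRANDS = {
    "washingtonpost": "washingtonpost.com",
    "foxnews": "foxnews.com",
}

def flag_lookalike(domain: str, threshold: float = 0.8) -> str | None:
    """Return a warning string if `domain` resembles a known brand."""
    if domain in KNOWN_BRANDS.values():
        return None  # the genuine site itself
    name = domain.split(".")[0].replace("-", "")  # crude normalization
    for brand, real_site in KNOWN_BRANDS.items():
        if SequenceMatcher(None, name, brand).ratio() >= threshold:
            return f"{domain} imitates {brand} (real site: {real_site})"
    return None

for suspect in ("washingtonpost.pm", "fox-new.in", "example.org"):
    print(suspect, "->", flag_lookalike(suspect))
```

Even a sketch like this hints at why screening is hard in practice: real detection would also have to handle homoglyphs, subdomains, and the constant churn of newly registered sites, which is why any static list of flagged domains goes stale so quickly.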
Overzealous partisans on either side of the political spectrum can be seen as bad actors. Among the fake campaign “news” cited in a Brookings Institution report were false images of Kamala Harris in a swimsuit hugging convicted sex offender Jeffrey Epstein, a fraudulent Fordham University transcript for Donald Trump claiming he had a barely passing 1.28 grade point average, an AI-generated video of a young man saying he was sexually abused by Tim Walz 30 years ago, and liberal conspiracy theories suggesting Trump engineered assassination attempts against himself.
Foreign actors, especially Russia, seem just as happy to create chaos – fostering an environment in which it is difficult to know who and what to believe, and therefore easy to believe only “facts” that support one’s worldview.
A second important question is: Why are people so eager to share information they haven’t verified? It’s a hard one to answer, given the similarly limitless supply of internet users – social media sites are used not only by real people but by a constantly increasing number of so-called “bots.” The Kennedy School study, authored by Dimitar Nikolov, Alessandro Flammini, and Filippo Menczer, pointed to the highly complex makeup of social media audiences.
“Not only are individuals and organizations hard to model, but even if we could explain individual actions, we would not be able to easily predict collective behaviors, such as the impact of a disinformation campaign, due to the large, complex, and dynamic networks of interactions enabled by social media.”
One thing the study could say, however, is that bad information tends to be shared by people on the extremes of the political spectrum, and that “false reports spread more virally than real news.” The authors attributed this to the fact that fake posts offer novelty and can be targeted to what information consumers want to believe.
Here perhaps it is important to remember the business model of internet search and social media. Though billed as benign tools and platforms, the large companies in this sector earn hundreds of billions annually by selling both access to, and data about, their users – users who willingly provide a virtually limitless amount of content and data for free.
Marketers love the fact that the rich data shared by users allows them to target likely customers with remarkable accuracy. Whereas advertisers once simply placed a message in a newspaper or broadcast and hoped a few readers or listeners might become customers, search and social media marketers can pick out very specific users from large audiences and deliver highly tailored messages across multiple platforms.
It is not the goal of these companies to provide accurate information. Their goal is to keep users online, constantly expanding the number of eyeballs they can sell to marketers. This is done by giving users what they want.
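A toy model makes the incentive visible. The sketch below is not any platform’s actual ranker; the post fields and weights are invented for illustration. The point is structural: nothing in the score rewards accuracy, so a novel, shareable falsehood can outrank a sober truth.

```python
# Toy model of an engagement-ranked feed. All fields and weights are
# invented for illustration; real platform rankers are far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's guess at click-through rate
    predicted_shares: float   # model's guess at share rate
    novelty: float            # 0..1, how unfamiliar the claim is

def engagement_score(post: Post) -> float:
    # Note: nothing here rewards accuracy. A novel false claim that
    # attracts clicks and shares outranks a sober, true one.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 0.5 * post.novelty)

feed = [
    Post("Sober policy analysis", 0.02, 0.01, 0.1),
    Post("Shocking (and false) claim", 0.08, 0.06, 0.9),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```

Run it and the shocking falsehood sorts to the top – which is the article’s point about why such feeds reward outrage over truth.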
The Harvard study found “a correlation between political partisanship, echo chambers, and political vulnerability to misinformation.” Highly partisan users have very strong feelings about what they perceive to be happening in the world. When they see what seems to be evidence of it, they often care more about sharing it quickly and widely than about checking it and perhaps having their worldview shaken.
The noted historian and philosopher Yuval Noah Harari tackled the subject in his newest book, “Nexus,” by comparing the current climate to the days when people believed in witches.
“The history of print and witch-hunting indicates that an unregulated information market doesn’t necessarily lead people to identify and correct their errors, because it may well prioritize outrage over truth,” he writes. “For truth to win, it is necessary to establish curation institutions that have the power to tilt the balance in favor of facts.”
If professional journalists are at least part of those “curation institutions,” consider that the total number of reporters and editors in the United States has dropped by more than 60 percent since 2008, according to a Georgetown University study. This drop correlates with a freefall in advertising revenue over the same period, as ad dollars now pour into the coffers of Google, Meta, and other players.
Meanwhile, a 2024 Gallup Poll showed that about a third of Americans express “no trust at all” in the mainstream media.
The Harvard study also notes that the partisan divide of disinformation may be hampering efforts by social media platforms to rein in the sharing of false narratives. Because these screening mechanisms identify right-leaning content more often, the authors said, they are vulnerable to accusations of political censorship.
Consider NewsGuard, a tech site that aims to “combine human expertise and technology to provide data, analysis and journalism that helps enterprises and consumers identify reliable information online.” Started in 2018 and led by Court TV founder Steve Brill and former Wall Street Journal publisher Gordon Crovitz, the company provides “source reliability ratings” which it says are produced by “a team of analysts using apolitical journalistic criteria and a transparent process.”
The company, whose clients include AI companies, tech platforms, news aggregators, search engines, advertising agencies and AdTech platforms, says it has uncovered more than 30,000 instances of false narratives spreading online, collecting more than 6.9 million data points along the way. Although it enjoys a solid reputation for accuracy among journalists and neutral third parties, the site has come under fire as partisan from some of the outlets it has flagged.
Jonathan Turley, a law professor, commentator, and author of “The Indispensable Right: Free Speech in an Age of Rage,” argued in a column for The Hill that sites like NewsGuard “fit into a massive censorship system.”
NewsGuard, he argued, wants to be “the media version of Standard & Poor’s rating for financial institutions,” but uses “subjective” judgments to create “nutrition labels” for consumers of information. “Of course, what Brill considers nutritious may not be the preferred diet of many in this country,” Turley wrote.
PolitiFact Editor-in-Chief Katie Sanders said she has been surprised by attacks on the very idea of fact-checking.
“What surprised us was that fact-checking became a bargaining chip in the (2024) election,” she said.
She noted that, at an event co-sponsored by PolitiFact and the National Association of Black Journalists, Trump “almost didn’t come on stage because of the fact that PolitiFact would be live fact-checking.”
“It was very controversial,” Sanders said. “And that’s funny because, like, it’s surely nothing new, right?”
A number of methods are being tried to fight the spread of bad information. One so-called “megastudy” involved more than 33,000 participants and tested several strategies, including warnings, source credibility labeling, media literacy tips, pre-emptive fact-checking, and debunking of disinformation.
The authors of the study called the results hopeful, but the interventions helped users identify false information only between 5 and 10 percent of the time. This, said the New York Times in a story that referenced the study, “pales in comparison to the enormous scale of digital misinformation.”
Sanders said the donations and sponsorships that fund PolitiFact remain robust, and audience engagement is strong.
“It’s easy to see that we are meeting a need that is so important to people,” she said. “But it is hard to quantify that, and it’s also daunting to think about how we can appeal to consumers who are burned out on the news.”
“The facts do matter,” Sanders said. “But it’s taking more stamina than ever before to stand up for them, and to have that work be valued.”