
How Blatantly False Headlines Can Distort What We Believe

New research highlights the need to curb huge falsehoods during the presidential election cycle

A reflection of Donald Trump on stage during a campaign watch party. He stands at a podium bearing a sign that reads “TRUMP” in capital letters, with American flags behind him; the reflection blurs and distorts the image.

Eva Marie Uzcategui/Bloomberg via Getty Images

Politicians have never been known for a strict adherence to truth. U.S. voters know their representatives routinely lie to them: according to a 2021 study, voters assume that even their own party’s politicians are dishonest about two fifths of the time. But in this election year, a larger-than-life candidate is openly distorting reality and challenging facts on an unprecedented scale. Mendacity in the style of former president Donald Trump, and the uncritical repetition of such blatant lies, can measurably chip away at our ability to assess the plausibility of other, unrelated news stories, according to a new preprint analysis currently awaiting peer review.

Repeatedly viewing obviously outlandish claims makes people more likely to believe ambiguous-seeming ones, the behavioral and cognitive scientists behind the new study conclude. The team’s results deal primarily with people’s beliefs rather than their ability to detect fake news (after all, something can be hard to believe yet still true). But the researchers also looked at how increases in perceived believability influenced people’s overall view of the truth.

Study co-author Reed Orchinik of the Massachusetts Institute of Technology says he wanted to examine our judgments of plausibility because he’s interested in how “the environment in which people encounter news affects the way that that news is interpreted.” He and his colleagues conducted five experiments, involving nearly 5,500 participants in all, in which they asked these individuals to read or evaluate news headlines. Across all the experiments, participants exposed to blatantly false claims were more likely to believe unrelated, more ambiguous falsehoods.


“The general idea is that you expose people to a stream of content that is either highly plausible or highly implausible,” Orchinik says. Some of the experiments used headlines made up by the researchers, while others used headlines taken from a variety of news outlets. In one experiment, participants were asked to rate the believability of 60 hypothetical news headlines, drawn from 300 fictitious entries that the researchers designed to have what they deemed low, moderate or high levels of implausibility. Another experiment took a similar approach but used real headlines fact-checked by Snopes.com. In the latter experiment, participants had to decide whether headlines were true or false (one such false headline, referring to the COVID-causing virus SARS-CoV-2: “U.K. Pathologist Warns Spike Proteins will Cause All Men to Lose their Reproductive Capacity”). The third and fourth experiments gave participants a long list of real headlines to simply read and then a short list of additional headlines that they had to judge as true or false. These tasks were designed to see whether passively reading headlines, as social media users might do in real life, influenced their perceptions. The fifth experiment sought readers’ opinions of the plausibility of true and false headlines shown at random.

In each experiment, some participants were exposed to lots of outlandish content: they saw implausible or false headlines between 58 and 80 percent of the time. And the participants who were shown more highly implausible headlines were more likely to believe ones that were less implausible. This effect was seen whether the participants self-identified as liberal or conservative. Also, as perceptions of plausibility rose, so did judgments of truth. “Another takeaway from this paper is that if you see a lot of highly implausible things, plausible things are also viewed as more true,” whether they are or not, Orchinik says. Put simply: big lies make little lies more convincing.

The study’s survey methods are “sufficiently rigorous,” says Lynnette Hui Xian Ng, a misinformation and disinformation researcher at Carnegie Mellon University. She notes, however, that the respondents were all U.S. residents, and the results may not generalize to the rest of the world. Orchinik, for his part, points out that the implications of his paper (which he emphasizes has not yet gone through peer review) also depend on the volume of moderately implausible information that exists online.

“It’s really difficult to say what the effects of misinformation in the 2024 election are going to be and what the implications of my own research for that are,” Orchinik says. Others are already very worried. Trump, who recently secured the Republican presidential nomination, has a startling history of prevarication. (He made more than 30,000 false or misleading statements as president, according to a Washington Post analysis.) Years of research show that fake news travels faster and farther than real news on social media. Although such platforms play an important role in how the public receives political information, some social media scholars fear that the companies that run them are abdicating their responsibility to arbitrate political content just when it is needed most. Rather than emphasizing legitimate discourse, the technology conglomerate Meta is simply deprioritizing political posts: it won’t use its algorithms to “proactively recommend content about politics” on Instagram and Threads, the company said in a February 9 statement. Meta also said it will take a similar approach to political content on Facebook “at a later date.”

It’s not just Meta that is changing how it treats political subjects: Elon Musk, CEO of SpaceX and Tesla, said he bought X, formerly Twitter, to promote free speech, which meant, among other things, overturning Trump’s nearly two-year ban from the platform, put in place after he allegedly incited violence at the U.S. Capitol on January 6, 2021.

Social networks have long tried to duck involvement in political debate because the companies behind them know that any intervention is likely to annoy at least half their audience. Intervention also opens them up to the risk of being defined under the law as a “publisher”—an entity that has some influence over (and thus responsibility for) content shared on its websites. This is legally distinct from a “platform,” which simply passes along other people’s posts—and thus avoids legal responsibility for their content. As Ng puts it, “Meta’s announcement of not recommending political posts might be because they do not want to be the arbiter of truth—to decide and fact-check what is real or fake news.” (Meta does have a third-party fact-checking program, but it does not review direct claims from politicians.)

Platforms that try to avoid promoting political content in their algorithms risk relegating it to areas at the fringes of their sites, such as private Facebook groups. Users may also couch opinions or debate in doublespeak to get around the political filters imposed by platforms such as Meta. “Now that the platform will not recommend political posts, people will obfuscate political ideas behind seemingly harmless posts to beat the censors,” says Ng, who adds that this has parallels with the way Chinese social media users rely on nicknames such as “Winnie the Pooh” to refer to China’s president, Xi Jinping. “This makes detection of political posts harder in [the] future, and once people learn the methods, they will construct similar posts for other domains,” Ng says.

Other problems arise when social media platforms don’t steer users away from misinformation and toward legitimate political content. “This leaves the door wide open for bad actors to push outlandishly implausible fake news stories in order to try and manipulate our perception of the world around us,” says Steven Buckley, a lecturer in media and communication at City, University of London. He says the new preprint study is “solid,” but he does note that the concept of “plausibility” itself is subjective. The paper’s researchers borrowed the untrue headline “Staring at Hard Times, Tucker Carlson May Be Forced to Sell Bow-Tie Collection” from a satirical online publication as an example of highly implausible news. Whether you think that’s improbable, though, may depend on your perceptions of strapped-for-cash celebrities.

You might not be falling for false information right now. But the new research suggests that mere exposure to it can influence future beliefs. “One of the major harms of fake news is that it fractures our society so that people end up living in their own bubbles of reality,” Buckley says. “This is precisely what nefarious political actors want.”

While Orchinik was wary of elaborating on what these new findings mean for an election cycle featuring Trump, he was willing to come to one conclusion about the role of social networks: “I feel pretty comfortable, from the paper, saying that curbing extreme falsehood is going to be helpful,” he says.