Exposure to news on social media, whether intentional or incidental, has the power to influence beliefs, which in turn can shape perceptions of reality, set political agendas, and spark social or political movements. While social media use has been associated with gains in factual political knowledge, online misinformation threatens both democracy and public health, leading some to declare an "infodemic" alongside the COVID-19 pandemic. In response to this online media environment, researchers across disciplines are studying the complex interplay between exposure to, belief in, and sharing of true and false information online. However, recent work has yet to unify measures of the diffusion of misinformation (i.e., sharing and exposure) with measures of belief in misinformation. As a result, we lack an estimate of the scale of belief in misinformation, which in turn limits our understanding of misinformation's impact on social media.
Without the ability to measure belief in misinformation at scale, we cannot fully assess the efficacy of interventions aimed at reducing the impact of misinformation on social media users. Platforms have recently employed various strategies to limit the spread of misinformation, including labeling questionable articles with fact-check labels, making them harder to share, and reducing their visibility in users' news feeds. Despite the rapid rise of these platform-level interventions, we lack a clear understanding of how they ultimately change belief in misinformation across a platform's user base. Recent work has shown how interventions can reduce the likelihood that users share misinformation, but these insights stop short of measuring the effect on user belief. The studies that do measure the effect of interventions on belief in misinformation focus exclusively on individuals and do not capture ecosystem-level effects, where a single share can expose thousands of other users. Measuring belief at scale is therefore a key first step toward assessing the full efficacy of interventions.
To fill this gap, we produce a robust large-scale estimate of user exposure to and belief in top-trending news online. Focusing on 155 trending true and false news articles that were viewed by over 10 million unique Twitter users, we combine (a) large-scale Twitter data tracking the spread of these articles with (b) real-time surveys of ordinary Americans measuring how likely users are to believe them. Using this new approach, we show that patterns of user exposure, sharing, and belief in misinformation are distinct from those of true news. Importantly, exposure to misinformation does not predict belief in the way it does for true news: while true news is seen and believed by an ideologically balanced set of users, misinformation is seen by users across the ideological spectrum but believed mainly by those at one end of it. Moreover, those who see both true and false news earliest after publication are also the most likely to believe it, underscoring that an article can reach and affect millions of users within a few hours. By accounting for individual user characteristics, article veracity, and article slant, our approach reveals these divergent patterns of exposure, sharing, and belief in true and false news, and highlights how large-scale user belief cannot be inferred from patterns of user exposure alone.
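As a rough illustration of how exposure data and survey-measured belief can be combined into a single estimate, the Python sketch below weights per-article exposure counts, broken out by ideology, by survey-based belief rates for each (article, ideology) cell. All names, data structures, and numbers here are illustrative assumptions, not the authors' actual pipeline or data.

# Minimal sketch: combine exposure counts with survey belief rates.
# Assumes (a) exposure counts per (article, ideology) cell from Twitter data
# and (b) survey-measured belief rates for the same cells. Hypothetical only.

from dataclasses import dataclass

@dataclass
class ExposureRecord:
    article_id: str
    veracity: str       # "true" or "false"
    ideology: str       # e.g. "left", "center", "right"
    exposed_users: int  # unique users exposed in this cell

def estimated_believers(records, belief_rate):
    """Return article_id -> expected number of believing users.

    belief_rate maps (article_id, ideology) to the probability that a
    surveyed respondent in that cell rates the article as accurate.
    """
    total = {}
    for r in records:
        p = belief_rate.get((r.article_id, r.ideology), 0.0)
        total[r.article_id] = total.get(r.article_id, 0.0) + p * r.exposed_users
    return total

# Example: a false article seen broadly but believed asymmetrically.
records = [
    ExposureRecord("fake_1", "false", "left", 40_000),
    ExposureRecord("fake_1", "false", "right", 45_000),
]
rates = {("fake_1", "left"): 0.05, ("fake_1", "right"): 0.45}
print(estimated_believers(records, rates))  # {'fake_1': 22250.0}

This kind of belief-weighted exposure is what distinguishes the approach from diffusion-only measures: two articles with identical exposure curves can yield very different numbers of believing users.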
Additionally, we use these data to simulate misinformation interventions and find that many common interventions are largely ineffective at preventing large-scale belief in misinformation. Because exposure and belief build quickly in the first few hours of circulation on Twitter, our simulations show that an intervention has a substantial impact only if it is implemented within a few hours of the URL first being tweeted. Research suggests that social media platforms could improve effectiveness by combining multiple interventions, but our simulations show that common interventions remain highly time-sensitive. The largest gains in effectiveness are therefore likely to come from methods, such as machine learning or crowd-sourcing, that offer faster turnaround than professional fact-checkers.
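The time-sensitivity result can be illustrated with a toy simulation: if most exposure to an article arrives within the first few hours, an intervention that blocks a fixed fraction of all subsequent exposures prevents far less the later it is applied. The cascade, parameters, and function below are synthetic illustrations, not the paper's data or its actual simulation code.

# Hedged sketch: effect of intervention timing on a synthetic cascade.
# Exposure inter-arrival times are drawn from an exponential distribution
# (mean ~2h) so that most exposure happens early, mirroring the pattern
# described above. All numbers are made up for illustration.

import random

def exposures_prevented(exposure_times_h, t_intervene_h, block_frac):
    """Fraction of total exposures removed by intervening at t_intervene_h,
    assuming the intervention blocks block_frac of all later exposures."""
    after = sum(1 for t in exposure_times_h if t >= t_intervene_h)
    return block_frac * after / len(exposure_times_h)

random.seed(0)
times = [random.expovariate(1 / 2.0) for _ in range(100_000)]  # hours

for delay in [0.5, 2, 6, 24]:
    frac = exposures_prevented(times, delay, block_frac=0.8)
    print(f"intervene at {delay:>4}h -> ~{frac:.0%} of exposures prevented")

Under these assumptions, intervening at 30 minutes prevents roughly 60% of exposures, while intervening at 6 hours prevents only a few percent, which is the qualitative pattern motivating faster-turnaround detection methods.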