Individual Submission Summary

Explaining Dysfunctional Information Sharing on WhatsApp and Facebook in Brazil

Thu, September 10, 10:00 to 11:30am MDT, TBA

Abstract

In the run-up to the 2018 Brazilian elections, false and misleading information was widely circulated through the mobile instant messaging service WhatsApp (First Draft, 2019). Researchers estimated that roughly half of all images circulating through the service were likely altered or distorted to convey false information (Tardáguila et al., 2018). Another study reported by The Guardian sampled WhatsApp messages prior to the election and found evidence of a politically right-leaning coordinated campaign to spread misinformation and bolster Jair Bolsonaro, who ultimately won the election (Avelar, 2019). Similar concerns have been raised in other democratic countries, including India and Indonesia, about the use of WhatsApp to spread false and misleading information in an effort to affect public opinion and alter election outcomes.

Research on the use of social media for political discussion has focused primarily on social networking sites (SNSs), with less attention to mobile instant messaging apps such as WhatsApp. Yet while the use of Facebook is declining worldwide, the use of messaging apps is on the rise, according to the Reuters Digital News Report 2018. The shift to private messaging raises concerns about digital threats, as encrypted messaging applications make it difficult to identify and mitigate the spread of false information. In light of these concerns, this study contributes to the emerging literature on mobile messaging and misinformation by examining the use of WhatsApp in Brazil. With over 120 million users in 2017, WhatsApp is the second most popular social application in the country, behind only Facebook.

One important dimension of this research is understanding who is most likely to share misinformation on these private messaging apps. Despite concerns about coordinated disinformation efforts and propaganda (Jamieson, 2018; Woolley and Howard, 2018), regular users are responsible for spreading misinformation within their own networks, a behavior that Chadwick, Vaccari, and O'Loughlin (2018) have described as "democratically dysfunctional news sharing". Although motivations for sharing news on social media vary, understanding the types of users and behaviors associated with dysfunctional news sharing is an important step toward devising strategies to combat the spread of misinformation online.

We adopt a comparative approach to examine dysfunctional sharing on WhatsApp and Facebook, as semi-public platforms have been scrutinized for facilitating the spread of misinformation through algorithmic curation and an engagement-driven news feed (Guess et al., 2019). Given the different affordances of these platforms, comparing their misinformation sharing dynamics may help clarify the differences between private and semi-public venues. Drawing on survey data from a representative sample of internet users in Brazil (N = 1,615), we examine both accidental misinformation sharing and intentional misinformation sharing, in which people recognize that information is incorrect and choose to share it anyway. Specifically, we investigate the relationship between dysfunctional sharing and 1) frequency of political talk; 2) cross-cutting exposure; and 3) social corrections (experiencing, witnessing, and performing them).

Our findings provide further evidence of a participation vs. misinformation paradox: those who are more engaged in political talk are significantly more likely to have shared misinformation on the platform they use to discuss politics, and also significantly more likely to disinform; for the latter, the effects are cross-platform, as political talk on WhatsApp is associated with intentional misinformation sharing on Facebook and vice versa. We also find that, rather than tempering the spread of false information, exposure to cross-cutting political views is positively associated with both types of dysfunctional sharing. Those who share misinformation are more likely to experience a social correction and to witness others being corrected, suggesting that false information does not go unnoticed in people's networks. Finally, we find that people are significantly more likely to experience, perform, and witness social corrections on WhatsApp than on Facebook, suggesting that the closer social ties maintained through WhatsApp may provide a sense of safety that supports these behaviors. Taken together, our results suggest that the intimate nature of WhatsApp communication has important consequences for the dynamics of misinformation sharing, particularly with regard to facilitating social corrections.

Authors