This project is fundamentally about assessing the quality, volume and content of news journalism on the social media landscape in the Arabic Twittersphere. Specifically, it examines the ways in which Twitter bots negatively impact access to the high-quality factual information necessary for promoting useful public debate on issues of governance and policy in Saudi Arabia.
According to Keller (2018), the public sphere may be especially vulnerable to the impact of bots, given their ability to manipulate online discussion. Indeed, bots are increasingly being used to distribute propaganda (Shorey and Howard, 2016). Bots themselves are not inherently malicious. At root they are software programs designed to execute commands, protocols or routine tasks, in this case on the internet. They exist online in enormous quantities, and are created for various reasons, including “news, marketing, spamming, spreading malicious content, and more recently political campaigning” (Gilani et al., 2017). However, advances in artificial intelligence and automation demonstrate that such software can be leveraged and exploited on an industrial scale, allowing for the automation of political communication and propaganda. A growing phenomenon under scrutiny has been the role of Twitter bots, often referred to in Gulf Arabic dialect as dhabāb iliktrūniya (electronic flies).
The potential impact on the public sphere is reflected in the number of bots. Research from Twitter (2014) and Chu et al. (2012) estimates that 5–10.5% of Twitter users were bot accounts. Previous studies have noted that even in small numbers, bots have a significant impact, including, for example, increasing the popularity of URLs (Gilani et al., 2017). Indeed, the dystopian and utopian framework common in studies of the internet has affected the nomenclature of bot research, with distinctions being made between benign bots and more malicious ones. As Ferrara et al. (2014, p. 2) explain, harmful bots “mislead, exploit, and manipulate social media discourse with rumours, spam, malware, misinformation, slander, or even just noise”. According to Shorey and Howard (2016), social bots can attack activists and spread propaganda. Through hashtag spamming and attempted trend creation (Gallagher, 2015), such bots are potentially harmful to civil society, as they impinge upon free speech and distort the public sphere (Marechal, 2016). Bots can drown out legitimate debate and pollute conversations with malicious, extraneous or irrelevant information, a phenomenon previously documented in Mexico and Russia (Mowbray, 2014). As Marechal (2016, p. 5025) argues, “Hashtag spamming—the practice of affixing a specific hashtag to irrelevant content—renders the hashtag unusable. Artificial trends can bury real trends, thus keeping them off the public and media’s radar”. Gallagher (2015) describes malicious bots as “weaponized censors” that can spam hashtags, intimidate opponents, issue death threats, and disseminate propaganda. There is also growing research on how bots may influence elections (Keller, 2018). Given their malicious behaviour and widespread use, identifying bots, and thereby developing ways to combat them, is imperative both for raising awareness of disinformation and for finding ways to prevent it.
There are few studies of the Middle East examining how bots disseminate seemingly official news content in large quantities. The existing literature ranges from accounts promoting international satellite channels such as Saudi 24 to accounts targeting regional Saudi hashtags. Taken together, it can be hypothesised that there has been an organised attempt to target specific geographic hashtags with large volumes of state-approved propaganda. This research seeks to utilise data gathered from regional and national hashtags, analysing it to determine (a) the extent of automated journalism in the Arabic Twittersphere, (b) the potential impacts of such journalism, and (c) the nature of the content. In order to do this, content will be downloaded from Twitter’s streaming API. Bot detection techniques pioneered by this author will then be applied to the data sets to determine what proportion of accounts active on such hashtags are automated. Content analysis will then determine the overall nature of the text, e.g. whether it can be seen as independent news or as apolitical stories that do little to evidence a lively public sphere. Messages produced by bots will then be compared to determine the diversity (or lack thereof) of public online Twitter journalism. This work contends that an emerging aspect of digital authoritarianism is the automated appropriation of the online public sphere, designed to give the illusion of civil society while ultimately creating a civil-society simulacrum. Additionally, this paper introduces new, perhaps unforeseen, problems facing Fraser’s (2007) analysis of parity and capacity in the transnational public sphere.
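The bot-detection techniques referenced above are the author's own and are not specified here. Purely as an illustration of the general approach, the sketch below implements one widely used heuristic that is not drawn from this study: flagging accounts whose inter-tweet intervals are suspiciously regular, a common signal of scheduled automation. The function name, field names and thresholds are hypothetical.

```python
from collections import defaultdict
from statistics import pstdev

def flag_likely_bots(tweets, min_tweets=5, max_interval_stdev=2.0):
    """Flag accounts whose posting intervals are near-constant.

    `tweets` is a list of dicts with 'user' and 'timestamp' (epoch
    seconds), as might be assembled from a streaming-API collection.
    Accounts with at least `min_tweets` tweets whose inter-tweet
    intervals vary by no more than `max_interval_stdev` seconds are
    flagged as likely automated. Thresholds here are illustrative only.
    """
    by_user = defaultdict(list)
    for tweet in tweets:
        by_user[tweet["user"]].append(tweet["timestamp"])

    flagged = set()
    for user, times in by_user.items():
        if len(times) < min_tweets:
            continue  # too little data to judge
        times.sort()
        intervals = [b - a for a, b in zip(times, times[1:])]
        # Near-zero spread in posting intervals suggests a scheduler.
        if pstdev(intervals) <= max_interval_stdev:
            flagged.add(user)
    return flagged

# Hypothetical data: one account posting every 60 s exactly,
# one posting at irregular, human-like intervals.
bot_tweets = [{"user": "news_bot", "timestamp": 1000 + 60 * i} for i in range(6)]
human_tweets = [{"user": "human", "timestamp": t}
                for t in (1000, 1300, 1410, 2995, 4200, 9000)]
print(flag_likely_bots(bot_tweets + human_tweets))  # → {'news_bot'}
```

In practice, published bot-detection work combines many such features (posting client, account age, retweet ratio, network structure) rather than relying on a single timing signal; this sketch shows only the shape of one feature-based rule.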