Social media platforms have gained a prominent role in spreading news and information during the Covid-19 crisis. While all media sources have been forced to contend with the spread of misinformation in this pandemic, social media platforms have varied in the speed and strength with which they combat it, often relying on the same tools they usually use: AI-based algorithmic detection and deletion of problematic content. While this approach has certainly deleted many videos and may have diminished the effects of misinformation, it focuses excessively on the elements of the platform that automated systems can most easily police while neglecting the others. This paper provides an overview of the use and consequences of these systems in the pandemic by examining the YouTube platform. YouTube has been one of the more proactive platforms in attempting to stem the tide of misinformation, and its automated systems have deleted many videos containing supposedly misinforming content. Using a content analysis and close reading of videos from channels sampled throughout 2020, this paper identifies approaches that purveyors of misinformation have taken to evade these automated solutions and theorizes the conceptual limitations of the automated, content-based approach YouTube and other social media companies are taking in addressing the spread of misinformation in this pandemic.