Through personalized recommendations, algorithms influence both the selection and the reception of content on YouTube. Via shared tags or topics, for instance, harmful, violent, or otherwise problematic content can be recommended alongside harmless content, blurring the boundaries between distinct or even contradictory categories of videos. In the realm of extremism and efforts to counter it, it seems particularly necessary to ask to what extent these algorithms shape the interrelatedness of counter-message and propaganda videos, and how they affect the likelihood that viewers of counter-message videos will come across problematic content. By means of two exemplary information network analyses based on the videos of two counter-message campaigns, we demonstrate that extremist content may be closely, or even directly, connected with these videos. The results point to the challenges YouTube's recommendation system poses for users and, therefore, to the problematic role of automated algorithms in the context of counter-message campaigns.
Josephine Schmitt, IfKW / LMU Munich
Diana Rieger, University of Mannheim
Olivia Cornelia Rutkowski, University of Cologne
Julian Ernst, University of Cologne