Session Submission Type: Panel
It has been claimed that algorithmic personalization may limit users’ choices by trapping them in a filter bubble (Pariser, 2012): an environment in which recommendations restrict future choices to familiar material. As algorithmic recommendation has become increasingly widespread in search engines, social media platforms, and news sites themselves, the potential magnitude of this issue has grown in recent years. Studies from a variety of fields have investigated potential filter bubbles, with differing and in some cases conflicting results (Flaxman, Goel & Rao, 2016; Jürgens, Stark & Magin, 2015; Moeller et al., 2016; Zuiderveen Borgesius et al., 2016). These studies have usually approached the issue on one of two levels: content and usage. Research on content has investigated the diversity of sources, viewpoints, and opinions among the items on a platform (local diversity), while research on usage (and users) asks more holistically whether the breadth of information that people consume online and offline makes it likely that they will be trapped in a filter bubble.
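The local/global distinction can be illustrated with a simple, hypothetical diversity measure. The sketch below uses Shannon entropy over source labels, one common operationalization of diversity; none of the panel papers is committed to this particular metric, and the outlet names and counts are invented for illustration.

```python
import math
from collections import Counter

def shannon_diversity(labels):
    """Shannon entropy (in bits) of the distribution of source labels.

    Higher values mean exposure is spread more evenly across sources.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Local diversity: sources of the items a single platform shows a user
# (hypothetical data).
recommended = ["outlet_a", "outlet_a", "outlet_a", "outlet_b"]

# Global diversity: sources across everything that user reads, on and off
# the platform (hypothetical data).
all_reading = ["outlet_a", "outlet_b", "outlet_c", "outlet_d"]

print(shannon_diversity(recommended))  # ≈ 0.81 bits: exposure is concentrated
print(shannon_diversity(all_reading))  # = 2.0 bits: four equally used sources
```

On this toy data, the platform’s recommendations look far less diverse than the user’s overall news diet, which is exactly the kind of gap between local and global measurement that the two research strands address.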
Building on these findings, a number of attempts have been made in recent years to evaluate diversity in algorithmically personalized environments (Bakshy, Messing & Adamic, 2015; Hannak et al., 2013; Kulshrestha et al., 2017; Zeng et al., 2012). For example, Zeng et al. contend that “though they are helpful in filtering information, recommendation algorithms may impose reinforcing influence on the system, by guidance to one’s choices which influences subsequent recommendations and hence choices of others” (2012, p. 18005). However, such academic considerations may carry little weight against applications driven by economic arguments. Couldry and Turow (2014) argue that algorithmic filtering methods used elsewhere in the media industry are increasingly being applied to news, and warn of adverse effects on public debate (p. 1722). Turow (2005) sees signs of a gradual shift in news production that goes beyond allowing users to filter items on news sites towards proactively producing content based on presumed user preferences. This may lead to imbalances in which social signals such as item popularity influence user decisions in unforeseen ways (Hogg & Lerman, 2015).
This panel brings together current empirical work from media and communication studies and computer science that assesses diversity in relation to personalized online news, both locally within a single platform and globally across usage patterns, to ask whether diversity is challenged or increased by algorithmic personalization. Methodologically, it combines the computational analysis of news items (paper 3), browsing behavior (papers 1 and 3), and search engine results (paper 2) with survey data (papers 1, 3, and 4) and focus groups (paper 4), thus seeking to advance computational approaches to measuring bias and diversity.
References (Please note: all references for the five papers are included here, rather than in the paper abstracts)
Bakshy, E., Messing, S., & Adamic, L. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. https://doi.org/10.1126/science.aaa1160
Couldry, N., & Turow, J. (2014). Advertising, big data, and the clearance of the public realm: Marketers’ new approaches to the content subsidy. International Journal of Communication, 8, 1710–1726.
Epstein, R., & Robertson, R. E. (2015). The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences, 112(33), E4512–E4521.
Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter Bubbles, Echo Chambers, and Online News Consumption. Public Opinion Quarterly, 80(S1), 298–320. https://doi.org/10.1093/poq/nfw006
Hannak, A., Sapiezynski, P., Molavi Kakhki, A., Krishnamurthy, B., Lazer, D., Mislove, A., & Wilson, C. (2013). Measuring personalization of web search. In Proceedings of the 22nd international conference on World Wide Web - WWW ’13 (pp. 527–538). New York, New York, USA: ACM Press. https://doi.org/10.1145/2488388.2488435
Hogg, T., & Lerman, K. (2015). Disentangling the effects of social signals. Human Computation, 2(2), 189–208. https://doi.org/10.15346/hc.v2i2.4
Jürgens, P., Stark, B., & Magin, M. (2015). Messung von Personalisierung in computervermittelter Kommunikation. In A. Maireder, J. Ausserhofer, C. Schumann, & M. Taddicken (Eds.), Digitale Methoden in der Kommunikationswissenschaft (pp. 251–270). Berlin: DGPuK. https://doi.org/10.17174/dcr.v2.11
Kulshrestha, J., Eslami, M., Messias, J., Zafar, M. B., Ghosh, S., Gummadi, K. P., & Karahalios, K. (2017). Quantifying Search Bias. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing - CSCW ’17 (pp. 417–432). New York, New York, USA: ACM Press. https://doi.org/10.1145/2998181.2998321
Moeller, J., Trilling, D., Helberger, N., Irion, K., & De Vreese, C. (2016). Shrinking core? Exploring the differential agenda setting power of traditional and personalized news media. Info, 18(6), 26–41. https://doi.org/10.1108/info-05-2016-0020
Newman, N., Fletcher, R., Levy, D. A. L., & Nielsen, R. K. (2016). The Reuters Institute digital news report 2016. Oxford, UK: Reuters Institute for the Study of Journalism.
Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2017). The Reuters Institute digital news report 2017. Oxford, UK: Reuters Institute for the Study of Journalism.
Pariser, E. (2012). The filter bubble: How the new personalized web is changing what we read and how we think. New York: Penguin.
Turow, J. (2005). Audience Construction and Culture Production: Marketing Surveillance in the Digital Age. The ANNALS of the American Academy of Political and Social Science, 597(1), 103–121. https://doi.org/10.1177/0002716204270469
Zeng, A., Yeung, C. H., Shang, M., & Zhang, Y. (2012). The reinforcing influence of recommendations on global diversification. EPL (Europhysics Letters), 97(1), 18005. https://doi.org/10.1209/0295-5075/97/18005
Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1), 1–16. https://doi.org/10.14763/2016.1.401
News Diversity Between Push and Pull, Online and Offline - Pascal Juergens, Johannes Gutenberg U Mainz; Birgit Stark, Johannes Gutenberg U Mainz; Melanie Magin, Norwegian University of Science and Technology
Biased or Diverse? Measuring Personalization in Online Search Results - Juhi Kulshrestha, Hans Bredow Institute for Media Research; Cornelius Puschmann, Hans Bredow Institute for Media Research
Reading a Bit About Everything or Everything About a Bit? Assessing Online News Use Through Combined Survey and Tracking Data - Damian Trilling, U of Amsterdam; Judith Moeller, U of Amsterdam; Bob Robbert Nicolai van de Velde, U of Amsterdam; Claes H. De Vreese, U of Amsterdam, ASCoR
“But I Compensate With Print!” User Preferences and Concerns Regarding Algorithmic News Filtering - Lisa Merten, Leibniz Institute for Media Research | Hans-Bredow-Institut; Sascha Hoelig, Hans-Bredow-Institut
Promoting News Diversity: An Interdisciplinary Investigation Into Algorithmic Design, Personalization and the Public Interest - Glen Joris, Ghent U; Frederik de Grove, Ghent U; Véronique Hoste, Ghent U; Eva Lievens, Ghent U; Luc Martens, Ghent U; Lieven De Marez, Ghent U