Communication researchers are gaining access to an expanding range of large-scale data sets (i.e., sources of big data) related to human beliefs and behavior across a wide range of situational contexts, including online browsing activity, consumer preferences, and political behavior. For those of us whose objects of analysis include culture – i.e., the shared beliefs, values, and customs of particular populations – these big data sets present an exciting new methodological frontier. Private companies have long used algorithms to interpret (and predict) beliefs and behaviors for commercial purposes, but academic researchers have largely relied on more traditional modes of interpreting cultural variables, such as public opinion polling, surveys, and key informant interviews. While online news organizations rely on algorithms to inform targeted advertising or content marketing strategies, algorithmic inferences and interpretations could be attractive to communication researchers seeking to dissect and describe various cultures of news consumption, sketch comparative typologies of journalism cultures, and develop theoretical explanations of levels of trust in the news. Yet these promises come with perils, such as open questions about the empirical validity of data collected for purposes other than research. This study will survey traditional field researchers as well as data and information scientists to (1) provide an overview of prevailing research practices around the use of large-scale data sources to draw conclusions about attitudes and public beliefs about media, (2) identify the unique challenges researchers confront as they attempt to access and interpret these data, and (3) summarize best practices for conducting this type of culturally focused, big data-driven research.