Session Submission Type: Panel
For many applications, researchers are interested in the sentiment (or tone, valence) expressed in a text. This can range from positive or negative product reviews to online harassment, expressed policy preferences, and the tone of election campaign coverage.
There are a number of ways to measure sentiment in communication science and computational linguistics, including manual expert coding, crowdsourcing, dictionary-based coding, and machine learning (e.g. Wiebe et al., 2004; Pang & Lee, 2008; Liu, 2012; Young & Soroka, 2012). Although good results have been reported with each of these methods, sentiment analysis remains a difficult task for several reasons: sentiment is inherently subjective; the language used to express sentiment is diverse and sensitive to contextual factors such as topic, domain, and language; and in many cases a single text expresses multiple sentiments about diverse targets.
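To make the dictionary-based approach mentioned above concrete, the following is a minimal sketch of lexicon-based sentiment scoring; the word lists are purely illustrative stand-ins for a real validated lexicon, and the scoring rule (positive minus negative hits, normalized by length) is one common but simplistic choice.

```python
# Illustrative sketch of dictionary-based sentiment scoring.
# POSITIVE/NEGATIVE are toy word lists, not a validated lexicon.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def dictionary_sentiment(text: str) -> float:
    """Return (positive hits - negative hits) / total tokens; 0.0 for empty text."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

print(dictionary_sentiment("a great product but terrible support"))  # 0.0
```

Note how even this toy example exposes the contextual brittleness the abstract describes: "a great product but terrible support" scores as neutral because the simple counting rule cannot resolve the two opposing sentiments or their distinct targets.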
This means that it is important to consider for what purpose and in which context a sentiment analysis tool was developed, for dictionaries and machine learning models alike. If the context of an application differs from the context for which the tool was developed, it can be necessary to re-estimate the tool's validity and possibly adapt it to the new context. Moreover, depending on the textual material and the research question, it can be crucial to analyse the sentiment target as well as the sentiment itself, i.e. to view sentiment as a relation between a source and a target rather than as a single score.
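Re-estimating a tool's validity in a new context typically means comparing its output against manual coding on a sample from that context. A minimal sketch of that comparison, with illustrative made-up labels (real validation would use a properly drawn sample and usually also report chance-corrected agreement such as Krippendorff's alpha):

```python
# Hypothetical sketch: checking an off-the-shelf sentiment tool against
# a small manually coded gold standard drawn from the new domain.
def agreement(gold, predicted):
    """Fraction of documents where the tool agrees with manual coding."""
    assert len(gold) == len(predicted)
    agree = sum(g == p for g, p in zip(gold, predicted))
    return agree / len(gold)

gold = ["pos", "neg", "neg", "neu", "pos"]   # manual expert codes (illustrative)
tool = ["pos", "neg", "pos", "neu", "neg"]   # tool output on the same texts
print(agreement(gold, tool))  # 0.6
```

A low agreement score on domain-specific material is the signal, discussed in the abstract, that the tool needs adaptation before its output can be trusted in the new context.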
These considerations imply that rather than a single sentiment analysis tool or method, we may need to develop and validate multiple tools for different contexts and purposes. This makes the question of good practices and support for tool development and validation more important than it is in fields where methods and tools are less context-sensitive.
This panel will bring together a number of experts in the field of sentiment analysis, showcasing various ways of automatically computing the sentiment of text with a focus on methods for selecting, developing, and validating sentiment analysis tools for different purposes and in different contexts.
Methodological Challenges in Estimating Tone: Application to News Coverage of the U.S. Economy - Pablo Barbera, U of Southern California; Jonathan Nagler, New York U; Ryan McMahon, Pennsylvania State U
Supervised Sentiment Analysis of Parliamentary Speeches and News Reports - Elena Sofie Rudkowsky, U of Vienna; Martin Haselmayer; Matthias Wastian, Technical U Vienna; Marcelo Jenny, U of Vienna; Stefan Emrich, Drahtwarenhandlung Vienna; Michael Sedlmair, U of Vienna
Using Crowdsourcing for Developing an Attributed Sentiment Analysis Tool - Wouter van Atteveldt; Antske Fokkens, VU U Amsterdam; Isa Maks, VU U Amsterdam; Kevin van Veenen, VU U Amsterdam; Mariken van der Velden, VU U Amsterdam
Distributed Sentiment Analysis of Real-Time Political Tweets - Carlos Arcila Calderon, Universidad de Salamanca; Miguel Vicente-Marino, U of Valladolid; Felix Ortega, U of Salamanca
Sentiment Analysis of Twitter Data of a Crisis: Supervised Machine Learning Method - Siyoung Chung, Singapore Management U; Jie Sheng Chua, Singapore Management U; Jin Cheon Na, Nanyang Technological U; Mark Chong