A growing body of work in computational communication research and political science engages with measuring emotions in political speech and text. I argue that we need to account for both the content and the sound of politicians' rhetoric to capture the full range of emotions expressed in political settings. Current approaches focus almost exclusively on analyzing either audio or transcribed speech. Consequently, existing text-based methods struggle to accurately measure the arousal dimension of emotive expressions (Cochrane et al., 2022), while approaches that instead leverage the sound of political speech (e.g., Dietrich et al., 2018) ignore textual information. This paper proposes a multimodal method for quantifying emotive expressions in spoken political language that combines textual and audio information in a deep learning framework. Applying this method to parliamentary speech, I demonstrate that accounting for the sound of political rhetoric enables more accurate measurement of the intensity of emotive expressions and better classification of discrete emotions such as anger and disgust. My findings have implications for the study of emotions in political rhetoric.
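The abstract does not specify the model architecture, so the following is a purely illustrative sketch of one common way to combine the two modalities: late fusion, in which a sentence embedding of the transcript is concatenated with utterance-level acoustic features before a classification head. The class name `MultimodalEmotionClassifier`, the feature dimensions, and the six-emotion output are all hypothetical stand-ins, not the paper's method.

```python
import torch
import torch.nn as nn

class MultimodalEmotionClassifier(nn.Module):
    """Hypothetical late-fusion sketch: concatenates a text embedding of the
    transcript with utterance-level audio features, then predicts discrete
    emotions (e.g., anger, disgust) with a small MLP head."""

    def __init__(self, text_dim=768, audio_dim=88, hidden_dim=256, n_emotions=6):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_emotions),
        )

    def forward(self, text_emb, audio_feats):
        # Project each modality into a shared space, fuse by
        # concatenation, then classify the fused representation.
        fused = torch.cat(
            [self.text_proj(text_emb), self.audio_proj(audio_feats)], dim=-1
        )
        return self.head(fused)

# Toy forward pass with random tensors standing in for real features.
model = MultimodalEmotionClassifier()
text_emb = torch.randn(4, 768)    # e.g., transformer sentence embeddings
audio_feats = torch.randn(4, 88)  # e.g., eGeMAPS-style acoustic descriptors
logits = model(text_emb, audio_feats)
print(logits.shape)  # torch.Size([4, 6])
```

Late fusion is only one design choice; alternatives such as cross-modal attention or fine-tuning a joint speech-text encoder would also fit the abstract's description of combining text with audio in a deep learning framework.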