The move toward adding watermarks and metadata to images holds promise for combating the spread of political deepfakes. However, as with traditional fact checks, the context and political framing of shared content may undermine their effectiveness. We therefore investigate the capacity of watermarks and labels to mitigate deepfake-driven misinformation in contexts that prime partisan identities. We conduct an online survey experiment using the oTree platform to simulate a social media environment, which allows us to capture respondents' behavior alongside traditional survey outcome measures. Each respondent sees several authentic social media posts together with a politically relevant deepfake, with random variation in whether and how the deepfake is watermarked or labeled. We ask respondents whether the content is AI-generated and whether the events depicted in the posts actually happened, allowing us to measure how useful different versions of watermarks and labels are for combating the harms of political deepfakes online.
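As a purely illustrative sketch (not the authors' implementation), random assignment of watermark/label conditions and collection of the two outcome judgments in an oTree app might look like the following; all field names, condition labels, and page names are hypothetical.

```python
# Hypothetical oTree app sketch: assigns each participant one
# watermark/label condition for the deepfake post and records two
# outcome judgments. Names and conditions are illustrative only.
import random

from otree.api import (
    BaseConstants, BaseSubsession, BaseGroup, BasePlayer, Page, models,
)


class C(BaseConstants):
    NAME_IN_URL = 'deepfake_feed'
    PLAYERS_PER_GROUP = None
    NUM_ROUNDS = 1
    # Hypothetical treatment arms: no marking, visible watermark,
    # platform label, or both combined.
    CONDITIONS = ['none', 'watermark', 'label', 'watermark_and_label']


class Subsession(BaseSubsession):
    pass


class Group(BaseGroup):
    pass


class Player(BasePlayer):
    label_condition = models.StringField()
    # Outcome measures: perceived AI generation and perceived veracity.
    thinks_ai_generated = models.BooleanField(
        label='Was this post AI-generated?')
    thinks_event_happened = models.BooleanField(
        label='Did the event shown in this post actually happen?')


def creating_session(subsession: Subsession):
    # Randomly assign the deepfake's marking condition per participant.
    for player in subsession.get_players():
        player.label_condition = random.choice(C.CONDITIONS)


class Feed(Page):
    form_model = 'player'
    form_fields = ['thinks_ai_generated', 'thinks_event_happened']

    @staticmethod
    def vars_for_template(player: Player):
        # The template would render the authentic posts plus the deepfake,
        # styled according to the assigned watermark/label treatment.
        return dict(condition=player.label_condition)


page_sequence = [Feed]
```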