Generative AI has transformed everyday digital disclosure. Unlike the social media era, in which users visibly curated identities before peers, AI interactions unfold through conversational interfaces that feel private, instrumental, and ephemeral. Yet these exchanges feed expansive infrastructures of data scraping, inference, model training, and retention that remain largely opaque. This paper asks whether the "privacy paradox", the well-documented gap between expressed privacy concerns and continued disclosure, still holds under these new conditions.
Drawing on socio-legal scholarship on liquid surveillance, consent-based governance, and informational power, I argue that generative AI reconfigures the terrain of disclosure. Privacy law continues to rely heavily on notice-and-consent frameworks that presume autonomous, informed decision-making despite profound asymmetries in knowledge and power. At the same time, algorithmic governance scholarship demonstrates how AI systems concentrate informational authority within institutional actors while obscuring decision-making processes behind technical opacity. In this context, what appears as paradox may instead reflect constrained agency within socio-legal infrastructures that normalize data extraction.
The paper draws on qualitative semi-structured interviews with 15–20 regular users of generative AI tools. Interviews explore how participants conceptualize privacy risks, consent, data retention, and model training, and whether they differentiate between public self-disclosure on social media and conversational engagement with AI systems. Preliminary findings suggest that many users frame AI as a "tool" rather than a platform, potentially attenuating privacy concern despite comparable or expanded data extraction practices. Others articulate uncertainty about model training and regulatory protections, revealing a reliance on institutional trust.
By situating AI use within broader transformations in digital governance, this study contributes to socio-legal debates on consent, accountability, and the evolving boundaries between individual responsibility and institutional power in the age of artificial intelligence.