Objectives
As qualitative researchers navigate the rapidly expanding landscape of GenAI, we often find ourselves trapped within a false binary that positions us as either uncritical AI apologists or technological doomsayers. Marshall and Naff's (2024) survey of 101 qualitative researchers demonstrated this polarization, revealing that scholars embraced AI for transcription while rejecting it for analysis, a pattern that undermined researcher capacity for thoughtful methodological engagement. We theorized the problematic archetypal relationships that qualitative researchers develop with AI tools and proposed a more critical alternative, ‘Echosocial Relationships,’ as a framework for productive human-AI engagement that preserves human agency while leveraging AI's pattern recognition capabilities.
Perspectives
We drew on Horton and Wohl's (1956) concept of parasocial relationships, extending it to human-AI interactions, and Roberts et al.'s (2024) warnings against anthropomorphizing AI despite fluent outputs. We then built our theoretical foundation on our own critical understandings of LLMs as sophisticated text prediction technology rather than sentient entities. Throughout this process, we prioritized Robbins' (2024) concept of ‘last mile work,’ the crucial space where algorithmic precision encounters human experience, as a framework for maintaining human interpretive authority.
Methods
Through systematic theoretical analysis of current research practices and critical examination of human-AI interactions in qualitative research contexts, we identified a saturation of operational-efficiency framing in how scholars approached AI integration. We conducted thematic analysis of methodological descriptions, examining how researchers consistently framed AI adoption as a workflow optimization challenge, emphasizing speed gains and procedural efficiency while neglecting fundamental questions about researcher-AI relationships, theoretical foundations, and epistemological implications. We analyzed researchers' metaphorical language, underlying assumptions, and reported relational dynamics with AI systems to develop archetypal categories of problematic relationships.
Data Sources
We drew on recent qualitative research literature incorporating AI tools, including Hitch (2024) and Christou (2023), who each emphasized processing speed and consistency gains, and Cheligeer et al. (2022), who positioned AI performance against human analytical capabilities using metrics like processing speed and consistency scores. We argue that this approach transforms qualitative inquiry from interpretive processes embedded within rich contextual knowledge into bounded tasks with measurable outputs, representing a significant departure from common qualitative epistemological commitments.
Conclusions
We generated three problematic archetypes:
Deus Ex Machina relationships position AI as omniscient authority, with researchers treating outputs as objective truth and abdicating interpretive responsibility.
Summoned Specter relationships involve projecting consciousness onto AI while maintaining the illusion of human agency, in practice outsourcing critical thinking to algorithmic processes.
Parasocial Partnership relationships develop pseudo-intimate connections with AI, anthropomorphizing sophisticated text prediction technology.
As an alternative, we developed Echosocial Relationships, which recognize AI tools as sophisticated mirrors reflecting our thinking patterns through recursive training processes. We train AI as AI trains on us, creating feedback loops that can enhance analytical thinking when approached critically and reflexively.
Significance
Our work directly advances the symposium's critical and anti-oppressive agenda by disrupting technological colonialism that privileges algorithmic processes over human wisdom and cultural knowledge. Through this framework, we offer concrete alternatives to problematic researcher-AI relationships while preserving human interpretive authority and the relational, justice-oriented foundations of qualitative inquiry.