Paper Summary

Technological Reflexivity in Practice: Navigating Generative AI in Qualitative Research

Fri, April 10, 1:45 to 3:15pm PDT, InterContinental Los Angeles Downtown, 7th Floor, Hollywood Ballroom I

Abstract

Objectives
The impact of artificial intelligence (AI) on qualitative research has long been discussed (Brent & Slusarz, 2003); however, the recent emergence of generative AI platforms has sparked renewed urgency to consider their methodological implications. Given how deeply AI is now integrated into researchers’ daily lives, many—if not most—future qualitative studies will involve its use. Researchers must therefore attend to both the possibilities and the limitations of AI-generated outputs (James et al., 2024). For instance, there is no escaping the biases embedded in training datasets—biases that may undermine commitments to justice and equity (Dahal, 2024; Marshall & Naff, 2024). These realities raise a critical question: how might we, as qualitative researchers, navigate the methodological and ethical challenges posed by AI?

Perspectives
In this methodological paper, I suggest one useful response is technological reflexivity—a critical practice that invites researchers to work with and through AI biases rather than merely correcting them. Technological reflexivity is an iterative mode of inquiry that enables researchers to contend with the biases embedded in AI systems and to examine how their own epistemic, cultural, and methodological positions shape those systems. It insists that AI systems are not merely tools to be fixed but epistemic artifacts that reflect the values, assumptions, and positionalities of their creators and users. A sketch of what this iterative record-keeping could look like appears below.
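
To illustrate one possible form this practice might take in an analyst's workflow, the Python sketch below shows a hypothetical structured reflexivity log; the field names are my own illustration, not an instrument from the paper. The design goal is that the model's output never stands alone: it is always stored with the prompt that elicited it and the researcher's positional reading.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReflexivityEntry:
    """One audited AI interaction in the analytic record.

    All field names are illustrative. Prompt, output, positional
    note, and analytic decision are kept together so that the
    entanglement of researcher and AI stays visible and auditable.
    """
    prompt: str                  # what the researcher asked the model
    output: str                  # what the model returned
    positionality_note: str      # how the researcher's stance shaped the prompt
    bias_flags: list[str] = field(default_factory=list)   # e.g., deficit language noticed in the output
    analytic_decision: str = ""  # what, if anything, was carried into the analysis
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))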

Methods
To explore how researchers might enact technological reflexivity with AI, I draw on a postcritical ethnographic study of therapeutic interactions involving children with autism. The study highlights how psychologized constructs—such as the “abnormal bodymind”—are not inherent but discursively produced within therapy talk.

Data Sources
Over two years, I collected and analyzed observational fieldnotes and 175 hours of video-recorded therapy sessions in a pediatric intervention clinic. The study involved eight therapists, 14 caregivers, and 12 children diagnosed with autism.

Conclusions
Although this is a methodological paper, I include excerpts from my dataset to demonstrate how technological reflexivity can guide critical engagement with AI tools in qualitative analysis. I argue that reflexively considering the limits of AI’s role is essential, especially when analyzing interactions in which children’s bodies and behaviors are often constructed as problematic or abnormal. The approach I advance highlights the co-constitutive relationship between researcher and AI: AI’s outputs influence analytic decisions, and researchers’ prompts shape AI behavior. Importantly, recognizing AI’s sycophantic tendencies, where models mirror researcher assumptions (Ranadi & Pucci, 2023), reveals the risk of perpetuating and amplifying biases if technological reflexivity is not rigorously applied to both AI systems and researcher positionality. I therefore suggest that technological reflexivity is vital for interrogating and navigating the limitations and biases embedded in AI-assisted qualitative research.
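
To make the sycophancy concern concrete, the following Python sketch shows one hypothetical probe: the same transcript excerpt is submitted under a neutral framing and a deliberately leading framing, and divergence between the two outputs cues reflexive scrutiny. The query_model callable is a stand-in for whatever generative AI interface a researcher uses; nothing here reproduces the study's actual procedure.

from typing import Callable

def sycophancy_probe(excerpt: str, query_model: Callable[[str], str]) -> dict[str, str]:
    """Submit one excerpt under two framings and return both outputs.

    If the leading framing yields markedly more deficit-oriented
    language than the neutral framing, the model is likely mirroring
    the researcher's assumptions rather than describing the data.
    """
    neutral = (
        "Describe what is happening in this therapy excerpt, without "
        "evaluating the child's behavior:\n" + excerpt
    )
    leading = (
        "This excerpt shows a child's abnormal behavior in therapy. "
        "List the problematic behaviors:\n" + excerpt
    )
    return {
        "neutral_framing": query_model(neutral),
        "leading_framing": query_model(leading),
    }

Logging both outputs alongside the researcher's own memo keeps the prompt's framing, the model's response, and the resulting analytic decision open to the kind of interrogation described above.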

Significance
Building on critical AI scholarship (e.g., Lindgren, 2023), I position technological reflexivity as both a methodological imperative and a critical stance. As a stance, it urges researchers to remain aware of how their identities, theoretical orientations, and contexts shape their engagements with AI. As a method, it enables more ethically grounded, critically engaged uses of AI—anchored in accountability, positionality, and the ongoing negotiation of power.
