Paper Summary
Postphenomenological Insights into Trainee-AI Interaction: Reimagining the Promise of Precision Education

Wed, April 8, 9:45 to 11:15am PDT, JW Marriott Los Angeles L.A. LIVE, Floor: 2nd Floor, Platinum D

Abstract

This paper examines how postphenomenology can serve as an alternative methodology for investigating human–AI interaction in medicine, offering an input-to-insights pathway within the framework of precision education. We draw from an ongoing mixed-methods study of medical trainees' interactions with a large language model (LLM) during a time-constrained clinical case. A postphenomenological lens, which articulates human–technology relations beyond predefined instrumental uses (Rosenberger, 2023), can illuminate how medical trainees engage with LLMs not merely as tools, but as mediating agents that provoke ethical reflection and systemic awareness. These perceptual patterns, particularly when analyzed through a framework that intersects with care ethics (Mol, 2008), provide a qualitatively grounded alternative to conventional data analytics, advancing the goals of precision education beyond its promissory orientation toward value-based care (Desai et al., 2024; Kuch et al., 2020).

Our study draws on Mol’s logic of care (2008) to understand how clinical care is enacted. Mol frames care as a collective, materially mediated practice shaped by clinicians, patients, technologies, and institutions, emphasizing uncertainty, situatedness, and ongoing adjustment. This framing helps us examine how medical trainees engage with LLMs as part of a broader ecology of care – where datafication, documentation, and technological mediation raise ethical tensions and invite reflection on what constitutes “good doctoring” (Mol, 2006, p. 411).

We employ postphenomenology, coupled with experimental design, as our method of inquiry. Postphenomenology’s attention to micro-scale mediations of perception (Rosenberger & Verbeek, 2015) aligns with feminist STS calls (Suchman, 2023) to resist reifying AI as a fixed “thing,” and instead investigate how meanings stabilize through context and use. This analytic lens enables us to examine how certain patterns of using LLMs become normalized, while others remain latent or excluded.

Preliminary data come from seven medical trainees (MS3/4 and PGY3–4), most of whom regularly use LLMs in clinical documentation. Participants completed a 20-minute diagnostic reasoning task using GPT-4o (with or without internet access), followed by Critical Incident Technique interviews. These interviews supported retrospective narration of key moments in their task engagement, analyzed through a postphenomenological lens attentive to perceptual, institutional, and ethical dimensions (Lim et al., 2025).

Three perceptual patterns are emerging: (1) trainees frequently anthropomorphize LLMs while expressing epistemic skepticism; (2) they experience friction around ownership of clinical decision-making; and (3) several use the encounter as a springboard to reflect on broader ethical issues, including data privacy, systemic trust, and documentation politics. One trainee’s vignette reveals how a moment of doubt sparked reflection on the physician’s role of “tinkering” within the datafication process – exposing institutional tensions around power, ownership, and care.

This study makes two key contributions. First, it demonstrates how perceptual and experiential inquiry can generate new forms of insight into LLM-mediated clinical reasoning, enriching the precision education cycle. Second, it identifies moments of doubt and hesitation as pedagogically generative, inviting ethics education that centers relational, reflective, and systemic engagement with care. The pathway from insights to intervention offered through perceptual and experiential inquiry can advance precision education’s goal of value-based, care-centered health systems.
