Paper Summary

Troubling the Platform: Concerns with AI for QDA

Fri, April 10, 1:45 to 3:15pm PDT, InterContinental Los Angeles Downtown, Floor: 7th Floor, Hollywood Ballroom I

Abstract

Objectives
The journal article-length literature on AI platforms and their applications to qualitative data analysis (QDA) has grown exponentially over the past two years. Topics range from AI’s contributions to thematic analysis (e.g., Christou, 2024; Naeem, Smith, & Thomas, 2025) to equity-focused research (e.g., Jiang, Ko-Wong, & Valdovinos Gutierrez, 2025).
A segment of this literature, however (e.g., Friese, 2025; Morgan, 2023), proposes that coding, as a QDA method, may become obsolete because AI can bypass coding altogether and rapidly produce patterned outcomes such as categories, themes, and assertions. But Saldaña (2025) questions this prediction, since its proponents refer to coding in general rather than as an array of processes and options.

Perspectives
This paper’s critical perspective centers on prompt engineering for AI analyses, particularly given that analytic terminology remains inconsistent among qualitative methodologists. Though thematic analysis is arguably the most frequently employed analytic method in published qualitative research, there is definitional heterogeneity over what a theme is, especially in relation to other analytic outcomes such as a category or an assertion (Saldaña, 2024). Researchers are cautioned not to assume that AI platforms already know the canon of QDA methods. Prompts must actively “teach” ChatGPT, for example, what a theme is and how it differs from a category.
Additionally, researchers of color have expressed concerns over AI’s “whitewashing” of results when data from non-white participants are analyzed (Ozuem et al., 2025). These concerns point to the platforms’ biases and limitations with research approaches such as intersectionality, action research, and critical inquiry.

Methods
The fifth edition of The Coding Manual for Qualitative Researchers (Saldaña, 2025) profiles 36 different coding methods along with recommendations for their applications with ChatGPT. Experiments with the platform suggest that it conducts more effective analyses with some coding methods than with others. In a few instances, ChatGPT produces less-than-satisfactory results with the more complex coding methods when compared with a human’s analyses. Corroboration between Saldaña’s coding work and ChatGPT’s outputs ranged from as high as 100% to as low as 0%, depending on the specific coding method.

Data Sources
Thirty-six data samples were extracted from The Coding Manual’s fifth edition manuscript. Each one was first analyzed manually and then with the ChatGPT platform.

Conclusions
ChatGPT excels at descriptive coding and, to some extent, at in vivo and emotion coding, but it lacks the capacity for accurate analyses using icon, values, and dramaturgical coding. Thus, qualitative methodologists should exercise caution before dismissing coding outright in the AI era.
Researchers of every stripe are still in the exploratory and experimental stages of learning how AI can serve as a supplemental assistive resource for QDA. To make sweeping assertions about these platforms while they are still evolving, sometimes problematically, is premature.

Significance
As with fieldwork in qualitative inquiry, researchers should remain inductively flexible as the platforms continue to evolve, applying not just epistemic authority over AI’s quantitatively generated outputs but also a troubling, critical lens on the platforms’ qualitatively presented assumptions and sometimes questionable results.
