Researchers using interview methods have always made use of emergent technologies. Advances in technology have enabled researchers to audio- and video-record interviews and to conduct interviews by telephone (Trier-Bieniek, 2012) and through online modalities such as email, text, and synchronous video (Salmons, 2014). Tools powered by artificial intelligence (AI) have automated transcription and translation services, enabling researchers to conduct interviews in languages other than their own and with people who are deaf or hard of hearing. Qualitative Data Analysis Software (QDAS) has integrated AI technologies to support searching, coding, and reporting tools. AI tools also enable the generation and mining of large mixed-methods data sets (Than et al., 2025). Arguments for integrating AI tools into qualitative research include reduced research costs, faster times to completion, greater access to harder-to-reach populations, and the ability to expand the scale of research projects. This presentation discusses the tension between saving time and money in pursuit of effectiveness and the priority on meaning-making with other humans that lies at the heart of qualitative interviews.
AI technologies make it possible to conduct interviews with AI personas, whether of the dead, the living, or wholly simulated figures. Researchers have experimented with generating opinion data using large language models (“silicon sampling”) (Boelaert et al., 2025) and have conducted in-depth interviews with simulated research participants (Kozlowski & Evans, 2025). Researchers can now create AI bots to interview humans, or AI agents that initiate autonomous actions, including conducting interviews. For example, the managers of the archive of Sir Michael Parkinson, a British talk-show host and journalist who died in 2023, have created a podcast, Virtually Parkinson, in which a simulation of Parkinson interviews human guests.
AI tools used in interview research can give more people access to information far faster than humans can provide it. Yet there is much debate about the objectivity of AI tools, since they are trained on data sets that may be biased. While some AI tools have been developed to recognize emotional states through nonverbal cues (e.g., virtual interviewers such as SimSensei Kiosk), research has demonstrated that AI interviewers do not always ask meaningful follow-up questions (Görer & Aydemir, 2024). Researchers have also found that AI simulations sometimes display “alien intelligence” (Kozlowski & Evans, 2025), and hallucinations are well documented. Less discussed by researchers are the ecological costs of using AI (e.g., de Vries, 2023).
Institutions are in an iterative process of developing and revising policies and procedures for using AI in research. Available guidelines tend to stress responsible and ethical use of AI, along with maintaining privacy, securing data, and guarding against bias. We do not know where AI tools are headed, but given the appeal of the “new,” coupled with competition to adopt the latest tools, there is no doubt that qualitative researchers who use interviews will continue to adopt AI tools and to imagine new ways of using them. How might qualitative interviewers do this ethically, while preserving the pursuit of meaning that is central to the work of qualitative research?