Paper Summary

The Use of AI in Research: Creating a Culture of Consent

Thu, April 9, 9:45 to 11:15am PDT, Westin Bonaventure, Floor: TBD, La Cienega

Abstract

When using public generative AI tools (e.g., ChatGPT, Claude), little is known about what happens once data are fed into their “black box” algorithms; we know what information we input into these AI models, but we do not know what happens to it afterward. Institutional Review Boards require transparency about how data collected from human participants will be handled. Where will the data be stored, and for how long? Who will have access to the data? Have participants been informed of data handling procedures? Have they consented to provide their data in light of this information?

The current answer for using public generative AI tools in educational contexts is: “We are not sure.” This uncertainty has implications for decisions about using AI in academic research and course design. The talk will 1) discuss the ethical challenges of obtaining informed consent when public AI tools are used in educational research; 2) describe how fostering a culture of informed consent around AI use can shape other ethical considerations in higher education research (e.g., collecting learning analytics data); and 3) outline considerations for adapting or adopting informed consent procedures for the use of AI in research.

This presentation is grounded in 1) the principles and practices of responsible technology organizations (Center for Humane Technology, All Tech is Human); 2) research ethics and practices described in the Belmont Report and the Common Rule; 3) student-centered research practices; and 4) models of consent developed by leading national healthcare providers (e.g., the FRIES and CRISP models).

Research findings on students’ perceptions of generative AI tools will be used to situate the challenges of ethically implementing these tools in research and course design. Consent-based frameworks described in the Belmont Report and in human health fields will invite attendees to consider how these frameworks might apply to the use of AI in higher education scholarship and teaching.

Data sources include 1) original research on students’ perceptions of generative AI tools in online education; 2) resources for faculty to support the responsible implementation of generative AI tools in online courses; and 3) policies developed by other human-centered organizations (e.g., Center for Democracy and Technology).

This presentation will illuminate how the adoption of public generative AI tools in research and course design may lack “informed consent.” Recommendations for addressing these challenges and for developing informed consent practices in the research context will be offered and discussed.

The presentation addresses the urgent need to develop consent-based practices for using generative AI tools in research and education. Examples of emerging practices and considerations for developing these practices at other institutions will help build a culture of consent-based approaches to AI research.

References

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health and Human Services. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/Index.html

Center for Democracy and Technology. (2024, February 21). CDT generative AI usage policy. https://cdt.org/cdt-generative-ai-usage-policy/

Intimacy Directors and Coordinators. (2022, October 11). Defining consent: From FRIES to CRISP! https://www.idcprofessionals.com/blog/defining-consent-from-fries-to-crisp
