Paper Summary
Reimagining Psychology With and Against AI: Using ChatGPT as a Pedagogical Partner

Thu, April 9, 9:45 to 11:15am PDT, Los Angeles Convention Center, Floor: Level One, Petree D

Abstract

Aligned with the theme of Unforgetting Histories and Imagining Futures, this paper explores how generative AI can help us “unforget” historically excluded psychological frameworks as we move into an AI-driven educational future. Rather than relying on AI to automate or standardize educational practices, I ask how it might be mobilized toward educational repair, centering the theories, voices, and values long overlooked or silenced in psychology classrooms. My goal is to use AI not just as a grading tool or for content generation, but as a pedagogical partner that supports inclusive, culturally responsive, and justice-oriented teaching.

I approach AI use in the classroom through a lens shaped by critical digital pedagogy, decolonial thought, and culturally sustaining pedagogy. Critical digital pedagogy invites us to use technology as part of a larger conversation about centering student voice, agency, and equity in the classroom (Stommel, 2018). Decolonial thought pushes us to move away from deficit-based models rooted in white, Eurocentric, and heteronormative values, and instead embrace approaches that recognize multiple ways of knowing (Andreotti, 2011). Culturally sustaining pedagogy affirms that the cultural and linguistic practices students bring to the classroom are not barriers, but powerful assets for learning and connection (Paris, 2012). Together, these frameworks call for educational experiences where students critically engage with technology, explore diverse perspectives, and feel seen, valued, and supported.

Using autoethnographic reflection as my mode of inquiry, I share the tensions and transformations that emerged while using AI in my course at a public urban university in New York City. My students are brilliant and hardworking, yet many navigate full-time jobs, caregiving responsibilities, and gaps in academic skills. Initially, I encouraged students to use AI tools for reading comprehension, writing, and quiz preparation, but the feedback those tools provided often felt generic and depersonalized. Students' insights pushed me to ask: How can AI support growth without flattening their voices?

In response, I built and trained a custom AI chatbot for my Child Psychology class that aligned with the values guiding my pedagogy. This tool doesn’t rewrite student work but prompts students to reflect, revise, and deepen their understanding of the material. It also introduces students to often-excluded scholars like Martin Brokenleg, Na’im Akbar, Carola Suárez-Orozco, and Jin Li, whose work centers healing, cultural belonging, and immigrant experiences in contrast to dominant developmental theories. I also programmed the chatbot to give warm, accessible feedback and check in about multilingual writing support. The goal was not just technical help, but to encourage critical thinking, amplify marginalized perspectives, and offer affirming, personalized support.

Through this process, what began as a classroom-based innovation grew into a broader ethical inquiry at the intersection of AI, pedagogy, and equity. This work is not about using AI to fix students. It is about co-creating tools that genuinely support them. When we build technology grounded in our values, we create learning spaces that connect more deeply to students’ lived experiences. It is an invitation to imagine futures where AI supports rather than replaces meaningful and reflective learning.
