Paper Summary

Designing and Evaluating AI-Enhanced Interactive Reports with Teachers

Sun, April 27, 8:00 to 9:30am MDT, The Colorado Convention Center, Floor: Meeting Room Level, Room 103

Abstract

Objectives

Digital Personalized Assessments (DPAs) can provide teachers with information about students’ performance and their assessment experience (e.g., engagement levels based on timing data, response patterns associated with particular misconceptions, the use of accessibility supports, and the presence of technical issues). We explore how to design AI-enhanced interactive reports that can support teachers in making instructional decisions.

Theoretical Framework

Caring assessments provide a framework for studying DPAs (Authors, 2024; Authors, 2017). Research on interactive score reporting and open learner models offers design principles that support understanding and appropriate use of assessment results (Bull & Kay, 2016; Kannan & Zapata-Rivera, 2022). For example, previous work on designing and evaluating interactive score reports for particular audiences involves considering the knowledge, needs, and attitudes of the audience in order to design appropriate graphical representations and supports for navigating and understanding assessment results (Authors, 2014). As DPAs become commonplace, more opportunities arise to gather student process and response data to support decision making. We expect this to increase interest in how DPAs use these data to assess students’ knowledge, skills, and other attributes, and in how personalization features are made available to students in these environments. Supporting this type of transparency may also help generate trust and increase adoption of DPAs. Work on open learner modeling offers guidance on how student model information can be made available to teachers and students to support teaching and learning goals (Authors, 2020). In addition, we build on previous work on AI conversations with teachers (Forsyth et al., 2017) and conversation-based assessments (Authors, 2024) to explore the use of AI-generated text that provides teachers with insights and explanations about assessment results, as well as potential next steps based on this information.

We address the following research question: How can AI-enhanced interactive reports for teachers be designed using assessment results from DPAs?

Methods and Data Sources

AI-enhanced interactive report mockups were developed and are currently being used as data collection materials (see Figure 1 and Table 1) in semi-structured interviews with teachers (n = 8). Questions focus on potential usability issues, appropriate use of assessment results, and teachers’ reactions to text generated by AI and by humans. Insights from teachers are being used to refine the current version of our interactive report prototype, which will be used in future design and evaluation iterations.

Results

We will present the AI-enhanced interactive report mockups developed, including human- and AI-generated text to support teachers’ interpretation of assessment results, and discuss the findings from the semi-structured interviews with teachers.

Significance

Advances in Generative AI can support appropriate interpretation of assessment results (e.g., by providing detailed, narrative insights for learners, teachers, and other stakeholders). These insights can help these audiences understand assessment results and make informed decisions. Findings from the present research can inform the design of AI-enhanced interactive reports with teachers.
