Paper Summary

Trans-semiotizing written scientific reasoning in biology for multimodal learning and teaching: A prompt engineering approach

Wed, April 23, 2:30 to 4:00pm MDT, The Colorado Convention Center, Floor: Meeting Room Level, Room 113

Abstract

In science education, multimodal representations combining texts, images, and videos are increasingly integrated into subject-matter instruction. Multimodality has become the norm, with well-documented benefits for students’ development of science literacies. The assessment of science learning, however, remains predominantly monomodal and focused on written output, creating a discontinuity with the multimodal nature of learning and teaching. While science writing offers an accessible channel for summative evaluation, how students cognitively develop their science literacies remains largely unknown, leaving instructors little basis for scaffolding learning within students’ zone of proximal development.

In this study, we employed Lemke’s (1988) Thematic Patterns Theory (TPT) as the theoretical framework for understanding how scientific reasoning is constructed in biology. TPT, a social semiotic approach, views the formation of scientific concepts as the patterned deployment of meanings and meaning relations. Learning science thus involves understanding thematic items (i.e., science concepts) and their semantic relationships (i.e., the linguistic interactions between concepts) (Author, 2022). To make TPT more accessible in educational contexts, we designed a set of prompts to assist Large Language Models (LLMs; in our case, ChatGPT-4) in identifying and visualizing thematic patterns in biology writing. A prompt is the channel of interaction between a user and an LLM, and its quality directly influences the effectiveness and outcomes of that interaction. We therefore used prompt engineering as the method for prompt development and evaluation.
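A prompt of this kind might be templated as in the following Python sketch. The wording and names (THEMATIC_PROMPT, build_prompt) are our illustrative assumptions, not the study’s actual prompt.

```python
# Illustrative sketch only: a hypothetical prompt template in the spirit of
# the study's approach (identify thematic items, name semantic relations,
# output a visualizable graph). Not the authors' actual prompt.

THEMATIC_PROMPT = (
    "You are analyzing a biology passage for thematic patterns.\n"
    "1. List the thematic items (key science concepts).\n"
    "2. For each pair of related items, name the semantic relation\n"
    "   (e.g., cause, part-of, condition-for).\n"
    "3. Output the concepts and relations as a graph in DOT notation.\n\n"
    "Passage:\n{passage}"
)

def build_prompt(passage: str) -> str:
    """Insert a student's passage into the template."""
    return THEMATIC_PROMPT.format(passage=passage)

print(build_prompt("Osmosis moves water across a semipermeable membrane."))
```

Templating the instructions this way keeps the analytic steps fixed across student texts, so only the passage varies between LLM calls.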

To enhance ChatGPT’s understanding and output of semantic relations in science writing, we leveraged ConceptNet 5.5 (Speer et al., 2017), a semantic network that provides a structured representation of general knowledge and labels the semantic relations between concepts. To ensure the accuracy and completeness of the text corpus, we drew passages from the biology textbook “Biological Science” (Freeman et al., 2018). Biology concepts and their semantic relations were extracted with ConceptNet 5.5 and integrated into the initial prompts for ChatGPT. For example, we created prompts directing ChatGPT to analyze a given text from the corpus and generate a graph of key concepts and their semantic relations, taking the relationships from ConceptNet into account. This grounding ensured that the generated graphs reflected the contextual richness of ConceptNet, improving the overall quality and accuracy of the visualizations. Through iterative generation and refinement, a set of initial prompts was developed and tested in a pilot study of student writings (N = 107) for a biology task concerning water intoxication, for which accuracy and internal consistency were calculated. The prompts were then reviewed by two university biology professors and refined based on their feedback and user experience.
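One way such ConceptNet relations can be folded into a prompt is sketched below in Python. It assumes ConceptNet 5.5’s public JSON edge format (each edge has start, rel, and end objects with label fields); the function names are ours, not the study’s.

```python
# Sketch, not the study's code: converting ConceptNet 5.5-style edges into
# plain-text triples that can be prepended to an LLM prompt as context.
# Edge JSON follows ConceptNet's public API shape (api.conceptnet.io),
# e.g. GET http://api.conceptnet.io/c/en/water returns {"edges": [...]}.

def extract_triples(edges):
    """Reduce ConceptNet edge objects to (start, relation, end) label triples."""
    return [
        (e["start"]["label"], e["rel"]["label"], e["end"]["label"])
        for e in edges
    ]

def build_prompt_context(triples):
    """Render triples one per line, ready to prepend to an LLM prompt."""
    return "\n".join(f"{s} --{r}--> {e}" for s, r, e in triples)

# A hand-written edge in ConceptNet's shape, standing in for an API response.
sample_edges = [
    {"start": {"label": "water"},
     "rel": {"label": "RelatedTo"},
     "end": {"label": "osmosis"}},
]
print(build_prompt_context(extract_triples(sample_edges)))
# water --RelatedTo--> osmosis
```

Supplying relations as explicit triples rather than free text makes it easier for the model to echo them back consistently in the generated concept graph.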

The finalized prompts helped visualize scientific reasoning and provided clear, effective guidelines for thematic pattern analysis, transforming a monomodal assessment into multimodal cognitive resources for learning and teaching. The study also highlights the potential of prompt engineering as a robust tool for analyzing discipline-specific content language features in educational contexts.
