Paper Summary
Measuring Student Perceptions of Motivational Climate: Insights From a Think-Aloud Study

Fri, April 12, 7:45 to 9:15am, Pennsylvania Convention Center, Floor: Level 100, Room 115C

Abstract

Purpose and framework. Research and social-cognitive theories of motivation indicate that motivational teaching can support students’ motivation via their perceptions of that teaching (Linnenbrink-Garcia et al., 2016; Wagner et al., 2016). Researchers rely heavily on student perceptions to study contextual processes; however, theoretical understanding of student perceptions is limited (Robinson, 2023), and measures lack validity evidence (Karabenick et al., 2007; Wagner et al., 2013). Building on Motivational Climate Theory (Robinson, 2023), we used qualitative methods to understand what students think about when they respond to motivational climate measures, contributing foundational validity evidence of response processes (AERA et al., 2014) and articulating a path toward strong methodology for measuring motivational climate.

Method. Fourteen undergraduate students (57% women) from computer science, chemistry, and calculus courses were recruited using purposive sampling to maximize diversity of gender, ethnicity, and major. In think-aloud interviews (Ericsson & Simon, 1980), respondents were asked to narrate their thoughts while responding to 27 motivational climate items selected based on theoretically cross-cutting motivational design principles (Linnenbrink-Garcia et al., 2016). Students responded to items about the course they were recruited from. Following Karabenick et al. (2007), we rated whether student responses were valid via criteria that specified valid item interpretation, relevance of information recalled, and congruent answer choice. Qualitative descriptive analysis was used to explore patterns in student responses.

Results. Of the 27 items examined, students provided > 75% valid responses for 14 items (see example items in Table 1), 50-75% for 9 items, and < 50% for 4 items (see example items in Table 2). Responses rated as valid illuminated the behaviours that students viewed as indicative of the teaching approaches addressed (e.g., Table 1 shows findings related to creating a respectful environment). Some items elicited invalid responses for idiosyncratic reasons, such as students consistently ignoring part of a question or misinterpreting words, but broader patterns also emerged. For example, 4 items concerned instructor behaviours that students felt were not feasible in large university lectures (one-on-one communication, considering the preferences of all students). Some students therefore interpreted these questions broadly to include behaviours better suited to the context, while others did not, resulting in opposite answers based on the same reasoning and the same instructor behaviour. Other items elicited invalid responses because they asked about beliefs that students assumed all instructors would endorse (e.g., that students should not make fun of each other’s ideas), so students agreed despite having no evidence. Table 2 shows examples of invalid responses.

Significance. Our results demonstrate a lack of evidence of response processes for existing instruments meant to measure motivational climate and provide guidance about how to improve item validity. In particular, the relevance of items can vary across classroom contexts, and different items may be needed for different contexts. Additionally, our study derived insights into the optimal wording of items and into what instructor behaviours students think of when asked about motivationally supportive teaching, which provides a much-needed link between motivation theory and classroom practice.

Authors