Paper Summary
Producing Trustworthiness in Studying What Works in Heterodox Curricula Interventions (Poster 8)

Wed, April 23, 12:40 to 2:10pm MDT, The Colorado Convention Center, Floor: Terrace Level, Bluebird Ballroom Room 2A

Abstract

This poster examines the challenges our research team faced in presenting “trustworthy” findings concerning “what works” in a curricular intervention, funded by the National Science Foundation’s Innovations in Graduate Education (IGE) program, involving graduate students in multiple disciplines.

Patton’s (2011) developmental evaluation model, which is utilization-focused and supportive of the ongoing development of innovations, guided the mixed methods design. Activities were explored iteratively over multiple implementations, giving program faculty opportunities to examine the parameters of program innovations as they unfolded and to make necessary changes in response.

Data analyzed to examine outcomes for students included observations of classroom interactions in three classrooms, 38 students’ written reflections, interviews with 33 students and 3 instructors, and quantitative measurements of cognitive flexibility.

Key questions for the evaluation team focused on examining in what ways, if any, (1) novel thinking could be generated, and (2) students learned skills to collaborate effectively with one another. We also examined the data set for observable and reported learning outcomes.

We found that evaluating what works for whom in classroom contexts is complicated by three issues related to:
· “what”: for this particular curricular intervention, standardization was not possible because (a) instructor-student relationships and interactions were context-specific, and (b) the emergent learning opportunities that arose when students observed and listened to one another were necessarily different every time.
· “works”: student outcomes such as creative and collaborative capacity were not easily assessed with the typical positivist modes that people tend to associate with trustworthiness, and for this study no alternative student groups (e.g., control groups) made sense for the purpose of comparing outcomes.
· “for whom”: our small, interdisciplinary cohorts, enabled by a graduate school requirement implemented at our institution in 2022, were not only self-selected but also included students from disciplines beyond the STEM and arts majors initially envisioned.

Establishing “trustworthiness” has long been understood in qualitative inquiry as involving confidence in the “truth value” of a study’s findings, whether findings can be applied in other contexts with similar outcomes, and the degree to which findings are free from personal biases and conflicts of interest (Lincoln & Guba, 1985, p. 290). This poster examines the unique contextual issues that arose during project implementation and the methodological challenges we faced in determining how to represent what works in one educational context in ways that are trustworthy. We argue that rather than generating prescriptions for what works in the form of facts and truths that are necessarily transferable to other contexts, findings can yield suggestions that might guide others to create, innovate, and revise the practices described. Studies whose findings do not precisely transfer to other contexts can still prove insightful and valuable for others. Further, admitting to still not knowing is, we argue, a trustworthy finding in and of itself, and contributes to the generation of other significant research questions.
