I take an unequivocal stance: media comparison studies should be wholly rejected. These studies have persisted for decades despite their repeated failure to yield meaningful insight into how learning actually happens (Reeves & Lin, 2020). While my own research focuses on immersive technologies, this critique applies across all media. Comparing delivery platforms (print versus e-books, video versus VR, or online versus face-to-face) is a misdirected effort rooted in flawed assumptions (Glaser & Moore, 2023).
The logic rests on the idea that we can isolate the medium and measure its “effectiveness” in producing learning outcomes. This framing, however, detaches media from pedagogy, context, and learner variability. It assumes media function independently of instructional design. As Clark (1983) argued, media are merely vehicles for instruction; they do not cause learning. Any observed gains can be traced to differences in instructional method, novelty effects, or implementation.
A popular rebuttal is that these studies would be useful if instructional methods were held constant. I want to push back on that idea: the separation is neither methodologically sound nor theoretically valid. Media and methods are deeply entangled. The affordances of a medium shape which instructional strategies are even possible, so you cannot hold method constant while changing media. A video lecture and a VR simulation might use the same script, but they create entirely different cognitive, emotional, and embodied experiences. Attention, feedback, pacing, and agency all shift with the medium, and each directly affects learning.

Ignoring these shifts misses the point, and the problem is compounded by vague or inconsistent terminology in existing media comparison research. “VR,” for example, can refer to anything from passive 360-degree video to fully interactive, motion-tracked simulation, and these distinctions matter. Without detailed reporting on how a medium is implemented or how its affordances are leveraged instructionally, results are difficult to interpret and almost impossible to replicate (Girvan, 2018). Yet published studies routinely compare “VR” in this loose sense to some other form of media that is defined just as inconsistently.

This line of research is technocentric, focusing on the novelty or presumed superiority of tools rather than the learning design behind them. It overgeneralizes and ultimately misleads researchers and practitioners alike by offering tidy conclusions where only complexity exists (Jonassen et al., 1994). What is lost is the nuance of how affordances, aligned with learning goals and learner needs, actually shape the experience.
Lastly, much of this research comes from scholars outside instructional design (e.g., educational psychology or STEM education) who may lack grounding in media theory. The result is a stream of poorly conceptualized studies that conflate correlation with causation and offer little practical value. The deeper issue is a lack of interdisciplinary training: graduate programs across fields should integrate instructional design theory, media history, and the complexity of learning environments. Instead of asking which medium is “better,” we should ask how to design experiences that align media affordances with specific goals and learner needs (Reigeluth & Honebein, 2023).