Influences on the Scaling of Digital Learning Resources

Mon, April 11, 11:45am to 1:15pm, Marriott Marquis, Floor: Level Four, Capitol

Abstract

Objectives

This paper explores factors that influence the spread of digital learning resources and compares them to factors associated with consistent effectiveness across implementation sites.

Perspective

By enabling rapid dissemination and bypassing system hierarchies, the Internet has disrupted the linear relationship between learning product maturity and scale. Many developers embrace the strategy of getting a “minimum viable product” into the hands of many users quickly and then improving it based on user feedback. A case in point is Khan Academy, which grew from a few YouTube videos to a million users in less than four years.

However, widespread use does not necessarily constitute widespread positive learning impact. Drawing on activity theory, we reason that understanding the learning impacts and spread of digital learning innovations requires clarity about what the innovation entails. Although a piece of hardware or new software is the most tangible component of these innovations, what matters is how these resources are actually used in instruction, alongside other, often non-digital, resources (Cohen et al., 2003). Educational effectiveness is the product of an instructional activity system, of which software resources are just one component.

Data Sources and Methods

We conducted a meta-analysis using scaling and learning outcome data from 64 “Next Generation” digital learning projects conducted between 2009 and 2014. Each project’s digital learning resource was coded for software design and implementation model features using a standard set of codes (Author et al., 2014). We obtained information on the scale of each digital learning product from project reports to the funder. We examined effectiveness in separate meta-analyses using effect sizes for the binary outcome of earning a course credit and for whatever continuous learning outcome measure (e.g., grade or assessment score) a project could provide for treatment and comparison students. After testing for heterogeneity of impacts, we conducted moderator variable analyses to identify factors associated with more positive learning outcomes. Correlational analyses related the same set of product features to the scale achieved (number of users within the grant period).
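The pooling and heterogeneity-testing steps described above can be sketched with a standard DerSimonian-Laird random-effects estimator. This is an illustrative sketch, not the paper's actual analysis: the function, the variable names, and the example effect sizes and variances are all hypothetical.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model.

    Returns the pooled effect, its standard error, Cochran's Q (the
    heterogeneity statistic), and tau^2 (the between-study variance estimate).
    All inputs here are hypothetical, for illustration only.
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # fixed-effect (inverse-variance) weights
    pooled_fe = np.sum(w * y) / np.sum(w)     # fixed-effect pooled estimate
    q = np.sum(w * (y - pooled_fe) ** 2)      # Cochran's Q heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # between-study variance (truncated at 0)
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    pooled_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return pooled_re, se_re, q, tau2

# Hypothetical effect sizes and variances for a handful of projects:
effects = [0.12, 0.30, -0.05, 0.22, 0.18]
variances = [0.02, 0.03, 0.04, 0.02, 0.05]
pooled, se, q, tau2 = dersimonian_laird(effects, variances)
```

A moderator analysis would then regress the per-project effect sizes on coded product features (e.g., instructor-training requirements) using these same inverse-variance weights.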

Findings

The features associated with widespread scale were quite different from those associated with positive impacts. Three factors were related to the scale achieved: 1) absence of a requirement for face-to-face instructor training, 2) a promise of cost savings, and 3) little requirement for changing instructional processes or organizational structures. By contrast, the features associated with positive impacts included support for change in pedagogy and the comprehensiveness of the intervention.

Significance

Many learning technology R&D efforts focus on supplementary resources to be added to existing courses and pedagogy. Our research suggests that more consistent positive outcomes are found for interventions requiring a change in pedagogy and wholesale redesign of an entire course—two factors that make scaling more difficult (Blumenfeld et al., 2000). This tension between scaling and producing consistently positive outcomes has significant implications for learning technology adopters, policymakers, and funders who need to consider innovations involving technology in their entirety, with careful planning of the resources needed to support learning.
