Paper Summary

EdTech Research and Development based on Logic Models

Fri, April 12, 4:55 to 6:25pm, Philadelphia Marriott Downtown, Floor: Level 4, Franklin 8

Abstract

In this presentation we describe [organization]’s process for structuring user research on EdTech tools to guide design and development. Our approach begins with constructing a logic model: a working theory, supported by existing research, that describes how the product is intended to improve student outcomes. Although logic models can take many forms, we have found the four-column logic model consisting of inputs, activities, outputs, and outcomes a useful framework. Inputs specify the core features that make the product work. Activities are how users (typically students and teachers) are expected to engage with the core features to improve learning. Outputs are immediately observable changes in users or educational systems that result from the activities. Outcomes are short- and long-term impacts.
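The four-column structure above can be represented as a simple data structure. The sketch below is illustrative only; the field names mirror the four columns, and the example entries are hypothetical, not drawn from any actual product.

```python
from dataclasses import dataclass, field

# A minimal sketch of the four-column logic model; each column
# holds one or more entries describing the working theory.
@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)      # core features that make the product work
    activities: list = field(default_factory=list)  # how users engage with those features
    outputs: list = field(default_factory=list)     # immediately observable changes
    outcomes: list = field(default_factory=list)    # short- and long-term impacts

# Hypothetical example for a product whose core feature is immediate feedback.
model = LogicModel(
    inputs=["immediate feedback on practice items"],
    activities=["students complete items and read the feedback"],
    outputs=["students revise answers after receiving feedback"],
    outcomes=["improved mastery of the target skills"],
)
print(model.inputs)
```

Reading across a row of this structure traces the intended causal chain from feature to impact, which is what makes the model's assumptions explicit and testable.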

Constructing a logic model is a collaborative process between developers and researchers, informed by learning theory, research, and development constraints. Logic models force the team to build on past research, specify the product’s core features, and articulate how users are intended to interact with these features to drive learning. This articulation is important because EdTech design typically requires interdisciplinary teams with diverse training (e.g., software development, learning science, business, education research, curriculum and instruction). Logic models thus provide a common language for these teams in understanding how the product is intended to work. Perhaps most importantly, a logic model reveals assumptions about the product, which can then be tested as part of a formative research program.

We structure our EdTech research programs on the assumptions in the logic model. For example, if a product’s core feature involves providing immediate feedback, a key research question is whether the feedback is comprehensible and meaningful to the target users. If students are expected to engage with the content for several weeks, we can measure student engagement over that span in a classroom study. Failure to validate the model can inform meaningful changes to the product, and ultimately to the model itself. For example, if users do not engage with the product over the intended period, the content may need to be more age-appropriate and engaging, or there may not be enough content for that length of time. Thus, inconsistencies between study findings and the working theory can become opportunities to make data-informed improvements to the product.
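The process above, pairing each logic-model assumption with a research question and flagging those the data fail to validate, can be sketched as follows. The assumptions and findings here are hypothetical, echoing the examples in the text, not results from an actual study.

```python
# Hypothetical mapping from logic-model assumptions to testable research questions.
assumptions = {
    "feedback is comprehensible to target users":
        "Do students correctly interpret the feedback messages?",
    "students engage with the content for several weeks":
        "Does classroom usage persist over the intended span?",
}

# Hypothetical study findings: True means the assumption was validated.
study_findings = {
    "feedback is comprehensible to target users": True,
    "students engage with the content for several weeks": False,
}

# Unvalidated assumptions flag opportunities for data-informed
# improvements to the product and, ultimately, the model.
to_revisit = [a for a, validated in study_findings.items() if not validated]
print(to_revisit)
```

In this sketch, the failed engagement assumption would prompt the kinds of changes the text describes, such as making content more age-appropriate or adding content for the intended span.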

Testing the validity of the logic model is a critical step on the road to establishing evidence of effectiveness. Education developers are naturally eager to establish their products’ effectiveness, but many educational studies do not yield positive findings, leading developers to question what went wrong and scramble for reasons that explain away implementation failures. By testing the logic model as a first step, we obtain data-informed insights about how to improve implementations and build in time to make the necessary improvements. Engaging in this process early is critical to ensuring readiness for a larger-scale test of the product’s impact.

Authors