Individual Submission Summary

Barriers to evidence adoption: a conjoint analysis of policymakers’ preferences for evidence dimensions

Saturday, November 15, 8:30 to 10:00am, Property: Hyatt Regency Seattle, Floor: 6th Floor, Room: 606 - Twisp

Abstract

Public policy research has seen a surge in the production of high-quality evidence. Yet while policymakers can now draw on a solid evidence base in many areas, much less is known about whether they use this evidence in practice. Recent findings suggest low rates of evidence adoption. For example, DellaVigna et al. (2024) document that fewer than 30% of nudges tested in collaboration with a national nudge unit were later implemented by cities. Related research has started to explore whether policymakers are insensitive to important dimensions of evidence (Vivalt & Coville, 2023; Xu et al., 2024) and whether decision aids or informational treatments can increase evidence adoption (Toma & Bell, 2024; Hjort et al., 2021).


In this research, we build on this work and ask which evidence dimensions policymakers find valuable when deciding whether to implement a policy. In a series of conjoint experiments, we present policymakers with two hypothetical programs aimed at reducing burnout among public-sector employees. Each program is described in terms of its “evidence dimensions”: evaluation method, sample size, effect size, statistical significance, long-term impacts, program costs, replication efforts, employee buy-in, and so on. Participants are asked to choose which of the two programs they would recommend for implementation.
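
To make the design concrete, the following is a minimal sketch of how such a paired choice task can be generated. The attribute names and levels below are illustrative placeholders, not the levels used in our experiments.

```python
import random

# Hypothetical attribute levels, for illustration only; the actual
# dimensions and levels in our experiments may differ.
ATTRIBUTES = {
    "evaluation_method": ["randomized controlled trial", "pre-post comparison",
                          "focus group"],
    "sample_size": ["n = 50", "n = 500", "n = 5,000"],
    "effect_size": ["no effect", "small positive effect", "large positive effect"],
    "statistical_significance": ["significant at 5%", "not significant"],
    "long_term_impact": ["effects persist after 2 years", "not measured"],
    "program_cost": ["$100 per employee", "$1,000 per employee"],
    "replication": ["replicated in 3 sites", "single study"],
    "employee_buy_in": ["high buy-in", "low buy-in"],
}

def draw_profile(rng):
    """Draw one program profile by sampling each attribute level uniformly."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def draw_choice_task(rng):
    """Draw a pair of profiles; redraw if the two programs are identical."""
    a, b = draw_profile(rng), draw_profile(rng)
    while a == b:
        b = draw_profile(rng)
    return a, b

rng = random.Random(42)
program_a, program_b = draw_choice_task(rng)
for attr in ATTRIBUTES:
    print(f"{attr:26s} | {program_a[attr]:32s} | {program_b[attr]}")
```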


We find that participants value the effect sizes reported in our hypothetical programs above and beyond any other evidence dimension. Programs with large positive effects, whether short-term or long-term, are more than 20 percentage points more likely to be recommended for implementation. Participants also display a slight preference for more recent research and larger sample sizes, as well as for employee buy-in, a dimension not typically considered by researchers.
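
Quantities like the 20-percentage-point figure above are naturally read as average marginal component effects (AMCEs), the standard estimand in conjoint analysis, which can be estimated by regressing the choice indicator on dummy-coded attribute levels. Below is a sketch of that estimation on simulated stand-in data; the data-generating process, attribute levels, and coefficients are assumptions for illustration, not our actual estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: in a real analysis, each row is one profile
# shown to a participant, with `chosen` = 1 if it was recommended.
rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "effect_size": rng.choice(["none", "small", "large"], n),
    "method": rng.choice(["focus_group", "rct"], n),
    "recent": rng.choice([0, 1], n),
})

# Build a choice probability that loads mainly on effect size, echoing the
# qualitative pattern described above (illustrative coefficients only).
p = 0.5 + 0.2 * (df["effect_size"] == "large") - 0.1 * (df["effect_size"] == "none")
df["chosen"] = (rng.random(n) < p).astype(float)

# Linear probability model: the coefficients on the attribute dummies are
# the AMCEs, i.e. the change in the probability of a profile being
# recommended when an attribute moves from its reference level to the
# listed level. A real analysis would also cluster standard errors by
# respondent.
model = smf.ols(
    "chosen ~ C(effect_size, Treatment(reference='none')) + C(method) + recent",
    data=df,
).fit(cov_type="HC2")
print(model.summary())
```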


At the same time, they are remarkably insensitive to other highly relevant dimensions of evidence. For example, participants are no more likely to recommend implementing a program that was evaluated in a randomized controlled trial (RCT) than a program that was evaluated in a focus group. Surprisingly, participants’ recommendations are also largely unaffected by information on program costs.


We contrast these findings with a landscape analysis of more than 1,000 published RCTs and policy evaluations drawn from evidence clearinghouses to understand whether researchers provide policymakers with the evidence dimensions they seek. For example, only roughly one third of the RCTs in our sample report long-term effects, and fewer than 1% of papers discuss employee buy-in, suggesting a gap between the evidence dimensions policymakers value and those that academics report.


Our results constitute a first building block towards a better understanding of the barriers to evidence adoption in the public sector. We demonstrate that policymakers value some evidence dimensions more than others, and that there is a mismatch between the information practitioners seek and the information researchers typically report. Our future research will ask whether shifting policymakers’ understanding of evidence dimensions changes how they value these dimensions, or whether these valuations reflect genuine preferences that researchers should cater to. Ultimately, our work helps bridge the gap between policymakers and researchers by highlighting where interpretations and valuations of evidence dimensions might diverge.
