Individual Submission Summary

(iPoster) Standards of Evidence in Public Policy Evaluation

Sat, September 13, 9:30 to 10:00am PDT, TBA

Abstract

This paper explores individual-level standards of evidence in the political domain. We define a "standard of evidence" as the threshold at which information collection gives way to decision-making. Specifically, we aim to understand which evidentiary standards voters rely on when they evaluate the effectiveness of legislation. While previous studies have provided important insights into how voters reason from existing evidence, current scholarship has paid little attention to the type of information that people consult when they evaluate the causal efficacy of policy interventions.
Our empirical investigation is based on original survey data collected in August 2023. We conducted a nationally representative online survey in the U.S. in which we asked respondents to evaluate the effectiveness of a new policy initiative (cash bail reform). The survey offered subjects different pieces of information with which to evaluate the effectiveness of the intervention. Among other things, respondents could view: (a) the number of instances in which cities have or have not been exposed to the policy intervention, along with observed public health outcomes for each case group; and (b) evaluations provided by in-group and out-group sources. After reviewing as many pieces of evidence as they liked, respondents were asked to make a final evaluation of the causal effect of the policy intervention.
Following this setup, we categorized respondents according to the type of evidence they consulted to evaluate the effectiveness of the policy. Our empirical analysis reveals two major findings. First, standards of evidence vary systematically across individuals: respondents differ along two main dimensions, (a) the amount of first-order, statistical evidence they collect on a given question and (b) the type of expert testimony they consult when assessing social cause-and-effect relationships. Second, both political ideology and people's overall propensity to engage in cognitive reflection explain at least some of this variation: more liberal respondents, as well as subjects with higher scores on the CRT-7 scale, exhibit a more pronounced tendency to collect direct statistical evidence and to consult expert testimony from different ideological sources.

Authors