The past two decades have witnessed growing interest among policy makers and private funders in using evidence to inform both policy and program design. For example, the Evidence-Based Policymaking Commission Act of 2016 called for improving the evidence available for decisions about government programs and established the Commission on Evidence-Based Policymaking. As a result, funders often prioritize organizations that use interventions designated as evidence-based practices (EBPs), meaning the practice has been rigorously evaluated and found effective.
But how can funders and implementers determine what is evidence-based? One approach is the development of evidence clearinghouses, which review studies of interventions and rate the strength of the evidence. Clearinghouses are heralded as an important vehicle for translating research into practice: they make it easier for implementers to find EBPs that address the social problems they face and, ideally, increase the likelihood that implementers will realize intended outcomes.
But, as this paper shows, choosing what works and implementing what works pose distinct challenges. We study the implementation of the 2018 Family First Prevention Services Act (FFPSA) to understand how implementation requirements affected the utilization of EBPs. FFPSA sought to reshape the child welfare landscape by incentivizing EBP use, allowing states to draw down Title IV-E funds for practices designated as EBPs through a federal clearinghouse. The expectation was that states would tailor practices to meet the diverse needs of their populations. We find that although states vary widely in child welfare infrastructure and needs, there is little variation in the EBPs chosen and limited drawdown of federal funds to date, almost seven years after the law's passage. These findings run counter to what one might expect given the promise of EBPs.
Using implementation data from 35 states as well as detailed case studies of two states, we identify several limitations affecting FFPSA implementation. First, putting EBPs into use often requires levels of operational and personnel capacity that many states do not possess: many EBPs in the federal FFPSA clearinghouse are longstanding programs requiring extensive training and certification, and many states lack the funding and personnel to take them on. Second, most states lack the evaluation infrastructure needed to comply with reporting requirements. To draw down funds, states must provide evidence of implementation fidelity by building a system for ongoing reporting and evaluation, a costly endeavor. Finally, the burden of evidence production falls on states that choose EBPs included in the clearinghouse but lacking the highest evidence rating of "well-supported." In these cases, states must also produce new evidence of effectiveness, which almost no state can afford, limiting experimentation.
As a result of these implementation constraints, states exhibit little variation in the EBPs included in the plans they file with the federal government. The set of EBPs narrows further in implementation, as states choose EBPs that align with their evidence-production capabilities. The result is very limited drawdown of Title IV-E funds across states, suggesting that implementation requirements thwart full adoption of a policy that initially enjoyed strong support across the policy field.