What makes a nonprofit organization effective? There are many potential answers to this question, but increasingly, funders of nonprofits seek quantitative measurements of program effectiveness as proof that human service nonprofits “work.” Such measurements are legible across a wide range of audiences, and carry with them the promise of certain knowledge in an uncertain world. Indeed, nonprofits are increasingly being pulled into alignment with the evidence-based policy movement, started in the U.K. and Australia but now in full swing across the globe, which is pushing governments to restrict social policy funding to those programs which have an established evidence base (Haskins, 2018).
The preferred metric for that evidence base is “net impact,” or the change in individuals that can be directly traced to a specific intervention. The randomized controlled trial (RCT) is often understood as the “gold standard” for determining net impact, and the widespread valorization of RCTs has led many to accept their use uncritically (Deaton & Cartwright, 2018). In the idealized scenario, RCTs clearly distinguish programs that “work” from those that do not, thereby allowing nonprofits to provide more effective services and maximizing the efficacy of public and private resources. This is an appealing goal, but not one that takes into account many of the considerations faced by those who lead, work in, and support human service nonprofits (Mosley et al., 2019).
This paper, based on our recently completed book, explores why RCTs are increasingly being embraced as the “gold standard” for nonprofit evaluation, how RCTs are carried out inside nonprofits, and what problems result. We go beyond existing critiques of the RCT method to focus specifically on the nonprofit context: describing what happens inside nonprofits when they take part in RCTs, the unintended equity issues that arise, why nonprofits decide to participate in RCTs despite the many challenges of doing so, and how the larger apparatus of RCT evaluation contributes to these processes. We base these findings on an analysis of the field, as well as interviews with a diverse set of 53 professionals embedded in the RCT ecosystem in the United States: sixteen professional evaluators, sixteen executives working in private foundations, and twenty-one executive directors of nonprofits.
Through our analysis, we identify five specific problems with using RCTs as high-stakes assessments of human service interventions:
1. The “False Certainty” Problem: RCTs are not a foolproof method of evaluation.
2. The “Programs Need Organizations” Problem: RCTs assess programs, but programs are embedded in organizations.
3. The “Communities Need Organizations” Problem: RCTs threaten the community-level benefits provided by nonprofit organizations.
4. The “Rich Get Richer” Problem: RCTs primarily advantage already well-resourced organizations whose way of working is easily adapted to RCT demands.
5. The “Testing Isn’t Learning” Problem: As high-stakes assessments, RCTs may actually hinder learning and innovation.
We conclude by pointing to what organizations, funders, and evaluators might do differently if their goals are to promote a nonprofit sector that is intelligent, innovative, and connective.
Deaton, A., & Cartwright, N. (2018). Understanding and misunderstanding randomized controlled trials. Social Science & Medicine, 210, 2–21. https://doi.org/10.1016/j.socscimed.2017.12.005
Haskins, R. (2018). Evidence-Based Policy: The Movement, the Goals, the Issues, the Promise. The ANNALS of the American Academy of Political and Social Science, 678(1), 8–37. https://doi.org/10.1177/0002716218770642
Mosley, J. E., Marwell, N. P., & Ybarra, M. (2019). How the “What Works” Movement is Failing Human Service Organizations, and What Social Work Can Do to Fix It. Human Service Organizations: Management, Leadership & Governance, 43(4), 326–335. https://doi.org/10.1080/23303131.2019.1672598