Paper Summary

Maximizing Scientific Impact and Optimizing Review Resources Through a Collaborative Approach to Evidence Synthesis

Fri, April 12, 11:25am to 12:55pm, Philadelphia Marriott Downtown, Floor: Level 4, Franklin 8

Abstract

Science is fundamentally collaborative and requires input, knowledge, and expertise from teams of researchers to produce valid and reliable findings. Yet existing incentive structures reward novel individual contributions over cumulative, collaborative efforts. Solitary achievements, such as publishing first-authored papers and securing independent grants, are prioritized in hiring and promotion decisions over team-based accomplishments. This focus on novelty and authorship has undesirable consequences for science and society as a whole, including duplicative efforts, reporting errors, and questionable research practices that lead to squandered resources and low replicability of canonical findings (Bakker & Wicherts, 2011; John et al., 2012; Open Science Collaboration, 2015).


Recognition of this problematic individualistic approach has led to some innovation and reform in primary research. For example, the Psychological Science Accelerator developed a distributed laboratory network that coordinates multi-site data collection on democratically selected research ideas (Moshontz et al., 2018). However, such developments have not yet been applied in evidence synthesis. This is surprising given the increasing prevalence of systematic reviews and meta-analyses in the published literature over the past several decades (Davis et al., 2014).

This paper presents a collaborative approach to evidence synthesis that grew out of the discovery of overlapping efforts in the social-emotional learning space. One team received funding to conduct a systematic review, only to learn within the first project year, through a Registered Report, that an independent team of scholars was already conducting (and nearing completion of) the same review. Rather than drastically pivoting topics, or lamenting their misfortune at being “scooped,” the two teams are now collaborating to address pressing questions raised by the (now published) review while reusing its screened and coded data.

This incremental approach has not been without challenges, both practical (for example, how to supplement the existing literature search with new terms specific to the collaborative review) and philosophical (for example, determining what constitutes a standalone contribution while maintaining sufficient overlap with the existing review to reuse its screened and coded data). Furthermore, with limited recognition for collaborative work, we face uncertainty about how our project will be received within the academic community. Nonetheless, our approach has already highlighted several advantages of collaborative evidence synthesis. By merging project teams, we have gained valuable insights and diverse project expertise. By building closely on the existing review, we are directly and expediently addressing the most pressing questions raised by the published report. By utilizing existing data, we are optimizing efficiency and maximizing the return on investment for our funders.

Although we are encouraged and energized by the success of our collaboration so far, many questions remain. How can we, as evidence synthesists, approach review topics more granularly to build on each other’s findings and utilize existing data? How can open science practices, such as Registered Reports, data sharing, and open materials, be applied more routinely to increase the visibility of in-progress synthesis projects? How can existing incentive structures be modified to acknowledge and reward multi-team review work? Lessons learned from our project, and the questions it has raised, will be discussed to help guide the future of collaborative evidence synthesis.
