Individual Submission Summary

Large collaborations in experimental psychology - ManyBabies and the Psychological Science Accelerator

Fri, October 5, 4:45 to 6:15pm, Doubletree Hilton, Room: Fiesta II and III

Abstract

The growth of large-scale collaborations in experimental psychology has produced new kinds of datasets that are richer and more highly structured, opening new possibilities for secondary analysis. Because these projects demand extensive shared documentation of methods and analyses, the parameters of data collection are transparent and well specified. However, the researchers undertaking these projects are trained primarily in smaller-scale research and must learn to document their studies effectively to support secondary analysis. This abstract describes the initial products of two such collaborations. A dialog with potential secondary users will advance best practices so that these large datasets can robustly support secondary analyses.

The ManyBabies project (Frank et al., 2017) aims to build large datasets that validate important findings in developmental psychology and to explore how best to collect and analyze these data. The first project (MB1) involves over 50 labs conducting a study of over 1,500 infants’ attention to child-directed speech; we believe it is the largest lab-based study of infant cognition to date. MB1 is conducted as a Registered Report (pre-registration: https://osf.io/r86u7/), and data collection will finish by April 2018. In addition to the primary looking-time measures, MB1 includes a variety of non-identifiable variables, both incidental (e.g., hours awake) and demographic (e.g., number of siblings). A “walkthrough” video of each laboratory also allows coding of additional methodological features of interest.

The Psychological Science Accelerator (PSA; Chartier et al., under review) is another network of laboratories (currently 210), representing 45 countries on all six populated continents, that aims to accelerate reliable and generalizable evidence in psychology. Its first round of data collection will attempt to generalize a study of face perception (Oosterhof & Todorov, 2008), examining potential cultural differences in how adults rate faces on traits such as trustworthiness. Materials have been translated into 22 languages (and counting), and a minimum of 5,750 participants is expected. PSA datasets will interest child researchers for their cross-cultural perspective, and future datasets may address developmental questions and populations.

In both projects, we aim to clarify standards of best practice for research communities moving toward producing rich datasets like these; both projects are conducted with open data sharing in mind.

Authors