Individual Submission Summary

ManyBabies - Using Larg(er) Experimental Datasets for Methodological and Theoretical Questions

Fri, October 5, 3:00 to 4:30pm, Doubletree Hilton, Room: Coronado

Abstract

Lab-based experiments in developmental psychology have historically been hampered by the difficulty of running high-powered studies on infant populations. This has presented challenges ranging from concerns about the replicability of key effects to uncertainty about how seemingly minor aspects of our experimental protocols affect effect sizes and fuss-out rates. The ManyBabies Consortium has recently concluded data collection on its first large-scale replication, an experimental test of the Infant-Directed Speech (IDS) preference across three experimental paradigms, over 60 labs, and over 2700 babies. Along with the central IDS and looking-time variables, we asked labs to report on a variety of incidental 'lab factors' - from time of day to room size to experimenter training - that have been offered as anecdotal reasons why a baby may fuss out of a study. Thus, in addition to providing more precise estimates of the size and development of this effect in infants and young toddlers, the dataset offers a novel opportunity to learn how lab practices may contribute to variation in fuss-out rates. This talk will discuss preliminary evidence from the ManyBabies1 dataset, approaches for robust secondary analyses of such datasets, and plans for data collection alongside upcoming ManyBabies projects.
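
As an illustration of the kind of secondary analysis described above, the sketch below fits a mixed-effects model relating an infant-level IDS preference score to lab factors, with lab as a random intercept. This is a minimal sketch only: the column names (ids_preference, age_months, room_size, lab) are assumptions, not the actual ManyBabies1 variable names, and the data are simulated for the example.

    # Minimal sketch: lab factors as fixed effects, lab as a random intercept.
    # Variable names and data are hypothetical, not the ManyBabies1 codebook.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_labs, n_per_lab = 20, 30
    lab = np.repeat(np.arange(n_labs), n_per_lab)
    age = rng.uniform(3, 15, n_labs * n_per_lab)            # age in months
    room = rng.choice(["small", "large"], n_labs * n_per_lab)
    lab_effect = rng.normal(0, 0.1, n_labs)[lab]             # between-lab variation
    pref = 0.3 + 0.02 * age + lab_effect + rng.normal(0, 0.3, n_labs * n_per_lab)

    df = pd.DataFrame({"lab": lab, "age_months": age,
                       "room_size": room, "ids_preference": pref})

    # Random-intercept model: fixed effects for age and a lab factor,
    # random intercept for lab to absorb lab-level variation.
    model = smf.mixedlm("ids_preference ~ age_months + room_size",
                        data=df, groups=df["lab"])
    print(model.fit().summary())

A binary outcome such as fussing out would instead call for a logistic mixed model; the linear model here is only meant to show how lab-level factors and lab-level random variation can be separated in one analysis.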

Author