Session Type: Training Session
Statistical models play a central role in scientific analysis, inference, and decision-making, so it is imperative that researchers diligently and thoroughly evaluate their models before disseminating them. This training session offers an immersive exploration of three perspectives on statistical model evaluation. Attendees will gain a theoretical and methodological understanding of (1) traditional goodness-of-fit testing and bootstrapping procedures, (2) Bayesian prior and posterior predictive model checking, and (3) information-theoretic techniques that adhere to the principle of minimum description length. This discussion will culminate in a simple framework that integrates all three perspectives. Session leaders will then demonstrate a user-friendly Shiny software application that allows users to upload data, specify a statistical model, select any or all of the methods within the framework, and generate a customized model evaluation report. Attendees should bring their own laptops, with R and RStudio installed prior to the session.
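To give a flavor of the "fit, simulate, compare" logic that underlies the methods above, the sketch below shows a parametric bootstrap goodness-of-fit check for a simple normal model in base R. It is purely illustrative and is not the session's Shiny application or its framework; the data, the normal model, and all object names are assumptions made for the example.

```r
## Minimal sketch (not the session's software): a parametric bootstrap
## goodness-of-fit check for a normal model. The same fit-simulate-compare
## logic generalizes to the model-checking methods discussed in the session.
set.seed(123)
y <- rnorm(100, mean = 5, sd = 2)          # illustrative "observed" data

## Fit the model: maximum-likelihood estimates under a normal model
fit <- list(mean = mean(y), sd = sd(y))

## Discrepancy measure: Kolmogorov-Smirnov distance to the fitted CDF
discrepancy <- function(x, m, s) {
  as.numeric(ks.test(x, "pnorm", mean = m, sd = s)$statistic)
}
d_obs <- discrepancy(y, fit$mean, fit$sd)

## Parametric bootstrap: simulate from the fitted model, refit, recompute
B <- 500
d_boot <- replicate(B, {
  y_rep <- rnorm(length(y), mean = fit$mean, sd = fit$sd)
  discrepancy(y_rep, mean(y_rep), sd(y_rep))
})

## Bootstrap p-value: proportion of replicates at least as discrepant
## as the observed data under the fitted model
p_boot <- mean(d_boot >= d_obs)
p_boot
```

The bootstrap is used here because the asymptotic Kolmogorov-Smirnov p-value is not valid when the model's parameters are estimated from the same data; simulating the null distribution of the discrepancy from the fitted model sidesteps that problem.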
The intended audience comprises researchers who use statistical models of any form. The methods covered in this session have been applied to item response theory, factor analysis, and structural equation models, and could be extended to other modeling frameworks. There are no prerequisites for this session, other than general familiarity with the practice of statistical modeling.