Paper Summary

Trifactor Mixture Modeling for Multi-Informant Assessment: A Simulation Study

Fri, April 12, 4:55 to 6:25pm, Philadelphia Marriott Downtown, Floor: Level 4, Room 403

Abstract

Multi-informant assessment is a standard and widely used practice in educational and psychological research. For example, student behaviors are assessed by teachers, parents, and peers as well as by students themselves. The trifactor model (Bauer et al., 2013) is useful for integrating information from multiple sources and assessing informants' agreement and disagreement about the construct being measured while accounting for measurement error and item bias, because it decomposes the variance of multi-informant scores into four orthogonal components: a common factor shared across informants, informant-specific perspective factors, item-specific factors (item bias), and measurement error.
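To illustrate the decomposition, here is a minimal data-generating sketch in NumPy for a student-teacher design. The loadings, sample size, and item count are hypothetical choices for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, items = 1000, 4  # hypothetical sample size and items per informant

# Four orthogonal latent components, each standard normal and uncorrelated:
common = rng.standard_normal(n)                  # shared across informants
persp = {inf: rng.standard_normal(n) for inf in ("student", "teacher")}
item_spec = rng.standard_normal((n, items))      # item bias, shared across informants

scores = {}
for inf in ("student", "teacher"):
    # Hypothetical loadings: .7 common, .4 perspective, .3 item-specific,
    # plus residual error; orthogonality makes the variances additive.
    scores[inf] = (0.7 * common[:, None]
                   + 0.4 * persp[inf][:, None]
                   + 0.3 * item_spec
                   + 0.5 * rng.standard_normal((n, items)))

# Implied item variance: 0.7**2 + 0.4**2 + 0.3**2 + 0.5**2 = 0.99
print(scores["student"].var(axis=0).mean())
```

Because the components are orthogonal, the squared loadings sum to the total item variance, which is what lets the model apportion informant agreement (common factor) from informant-unique perspective and item bias.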

Recently, Kim and von der Embse (2021) demonstrated the trifactor mixture model, which combines the trifactor model with mixture modeling to explore potential heterogeneity across individuals in the common and informant-specific perspective factors. Based on student and teacher ratings of student academic behaviors, they identified three latent classes: congruent, high teacher-perspective, and high student-perspective. However, the ability of the trifactor mixture model to detect population heterogeneity is unknown and needs to be fully investigated. In this study, we conduct a Monte Carlo simulation study to systematically examine the adequacy of trifactor mixture modeling in detecting unknown heterogeneity under various multi-informant assessment conditions.

To systematically investigate the performance of trifactor mixture modeling, the design factors were the number of items per informant perspective factor (4 or 8), number of latent classes (2 or 3), class separation (small, moderate, or large), explained common variance (.8 or .5; von der Embse et al., 2023), sample size (500, 1,000, 2,000, or 5,000), and class proportions (equal or unequal), yielding 192 conditions in total. Data were generated in Mplus 8.8 (Muthén & Muthén, 1998-2017), with 200 replications per condition. Simulation outcomes were the correct class enumeration rate (for the AIC, BIC, saBIC, Lo-Mendell-Rubin [LMR] test, and adjusted LMR test), parameter recovery (relative bias of latent factor means and factor loadings), and class assignment accuracy.

Based on the simulation findings, we aim to provide specific guidelines and recommendations, such as on sample size and model selection strategies, for applied researchers who use the trifactor mixture model with multi-informant assessment data.

This study will contribute to the methodology of multi-informant assessment research by providing a statistical means for researchers not only to integrate the unique perspectives of multiple informants but also to investigate potential heterogeneity across individuals when multiple informants' scores are used to assess targets' behaviors.