Paper Summary

When Is RI-LTA Necessary? A Monte Carlo Comparison of Latent Transition Models

Sat, April 11, 9:45 to 11:15am PDT, InterContinental Los Angeles Downtown, 5th Floor, Echo Park

Abstract

This study examines the conditions under which Random Intercept Latent Transition Analysis (RI-LTA) is a suitable model for longitudinal latent class data. RI-LTA extends traditional LTA by partitioning out stable, between-person differences and has been increasingly recommended as a preferred alternative. Despite its growing use, however, there has been little large-scale investigation into whether RI-LTA is always appropriate in applied research contexts, especially when the research question depends on stability as well as change. This study addresses that gap by evaluating the performance of both LTA and RI-LTA across a wide range of longitudinal modeling conditions relevant to education and the social sciences.
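To make "partitioning out stable, between-person differences" concrete, the measurement model can be sketched as follows. This is the standard RI-LTA formulation from the general literature, not notation taken from this abstract; the loading λ here is the same quantity varied in the simulations below.

```latex
% Sketch of the RI-LTA measurement model for a binary item u_{ijt}
% (person i, item j, time t), given latent class membership C_{it} = k
% and a person-level random intercept factor f_i:
\operatorname{logit} P\bigl(u_{ijt} = 1 \mid C_{it} = k,\ f_i\bigr)
  = \tau_{jk} + \lambda_j f_i, \qquad f_i \sim N(0, 1)
% tau_{jk}: class-specific item threshold; lambda_j: random intercept loading.
% Setting every lambda_j = 0 recovers standard LTA; nonzero loadings route
% stable between-person variance through f_i rather than through the classes.
```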
Two Monte Carlo simulation studies were conducted using MplusAutomation in R, generating over 648,000 datasets across 1,296 unique design cells. Study 1 tested two-class models, systematically varying sample size (500 to 4,000), transition probabilities (T₁₁ = .200 to .800), and random intercept loadings (λ = 0 to 1.0). Study 2 extended the analysis to three-class models and introduced additional variation in sample size, class prevalence, item-response strength, and the structure of random intercept loadings (scalar vs. patterned). Within each condition, correctly specified and misspecified models were compared to assess how each model handles structural mismatch and latent-trait complexity in typical applied settings.
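The orchestration of such a design can be sketched in R. The snippet below is a hypothetical illustration of the workflow, not the authors' code: it builds one Mplus MONTECARLO input per design cell from an illustrative subset of the Study 1 factors, then runs and harvests them with MplusAutomation; the Mplus MODEL POPULATION and MODEL blocks that encode the thresholds, T₁₁, and λ values are elided.

```r
# Hypothetical sketch of a design-cell grid driving Mplus Monte Carlo runs
# via MplusAutomation. Factor levels below are an illustrative subset.
library(MplusAutomation)

design <- expand.grid(
  n      = c(500, 1000, 2000, 4000),  # sample size
  t11    = c(.200, .500, .800),       # stayer probability T11
  lambda = c(0, 0.5, 1.0)             # random intercept loading
)

dir.create("cells", showWarnings = FALSE)

for (i in seq_len(nrow(design))) {
  inp <- sprintf(
    paste(
      "TITLE: Study 1 cell %d (n = %d, T11 = %.3f, lambda = %.1f);",
      "MONTECARLO:",
      "  NAMES = u11-u15 u21-u25;",
      "  GENERATE = u11-u15 u21-u25 (1);",
      "  CATEGORICAL = u11-u15 u21-u25;",
      "  GENCLASSES = c1 (2) c2 (2);",
      "  CLASSES = c1 (2) c2 (2);",
      "  NOBSERVATIONS = %d;",
      "  NREPS = 500;",
      "ANALYSIS: TYPE = MIXTURE;",
      "! MODEL POPULATION and MODEL blocks (thresholds, T11, lambda)",
      "! are omitted from this sketch.",
      sep = "\n"
    ),
    i, design$n[i], design$t11[i], design$lambda[i], design$n[i]
  )
  writeLines(inp, file.path("cells", sprintf("cell%04d.inp", i)))
}

runModels("cells")                               # run every .inp in the folder
res <- readModels("cells", what = "parameters")  # harvest estimates per cell
```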
Simulation results suggest that RI-LTA recovers parameters more accurately than LTA when trait-like stability is present but not explicitly modeled, especially in high-movement scenarios and when class sizes are uneven. When no stability was present, RI-LTA produced slightly inflated standard errors but retained unbiased point estimates, making it a cautious default when the underlying structure is unknown. Still, this comes with trade-offs. Because RI-LTA removes between-person variance by design, it can obscure patterns that may be central to a study. For instance, if a researcher is interested in long-term patterns of mental health rather than day-to-day fluctuations, RI-LTA may filter out precisely the variance that defines the construct. In such cases, RI-LTA does not fail statistically, but it can misalign with the purpose of the analysis. LTA, in contrast, preserved that variance and more closely reflected the generating model when trait-like stability was present and substantively meaningful. LTA also performed well under conditions of low movement, strong class separation, and limited person-level effects.
The key insight is that neither model is universally better. RI-LTA is a strong choice when the goal is to isolate within-person change, particularly in data with meaningful variability over time. But when stability is part of the phenomenon of interest, RI-LTA may inadvertently filter out relevant patterns. LTA remains a valid option in these cases, especially when supported by theory or prior evidence.
This work highlights the importance of aligning model selection with research goals. Fitting both LTA and RI-LTA provides a practical strategy for identifying trait-like variance and avoiding misinterpretation. These insights are especially relevant in education research, where behavioral constructs often conflate state and trait elements. The study advances methodological clarity by delineating when RI-LTA enhances inference and when it risks misrepresenting the phenomenon under study.
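As a concrete version of that dual-fitting strategy, the hypothetical snippet below compares two already-estimated Mplus runs. The file names and the factor name F are assumptions for illustration, not details from the paper.

```r
# Hypothetical comparison, assuming LTA and RI-LTA have already been fit to
# the same data in Mplus, producing lta.out and rilta.out.
library(MplusAutomation)

lta   <- readModels("lta.out")
rilta <- readModels("rilta.out")

# Information criteria: a clear BIC advantage for RI-LTA is one signal that
# trait-like between-person variance is present.
c(LTA = lta$summaries$BIC, RI_LTA = rilta$summaries$BIC)

# Random intercept loadings (factor assumed to be named F): estimates near
# zero suggest little stable variance, so standard LTA may align better.
subset(rilta$parameters$unstandardized, grepl("^F\\.BY$", paramHeader))
```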
