Over the past two decades, value-added measures (VAMs) have become a central component of education research, evaluation, and policymaking. Test-based value-added measures (TVAMs) have been rigorously evaluated in both experimental and nonexperimental settings, where they have been shown to provide unbiased estimates of teacher contributions to students' short-run learning as measured by test scores. While TVAMs are attractive in part because they predict key later student outcomes such as college attendance and earnings, they capture only a small portion of teachers' overall contributions toward these long-run outcomes. As a result, more recent research has used VAMs to assess teachers' contributions to short-run nontest student outcomes (e.g., attendance, discipline, grades, and retention). Nontest value-added measures (NVAMs) appear to better predict teachers' contributions to some long-run student outcomes, such as high school graduation and college enrollment. This suggests that incorporating both test and nontest outcomes could yield more comprehensive measures of teaching effectiveness that better predict longer-run student outcomes.
The spread of both TVAMs and NVAMs raises the question of how to weight each measure. Suppose a policymaker wants to know which teachers are most effective at developing the skills that make their students more likely to enroll in college. How should TVAMs and NVAMs be jointly incorporated? Researchers and practitioners currently face a bewildering array of decisions about how to combine the various measures to provide evidence on which teachers might be improving the long-run outcomes of their students.
We therefore propose a new method that uses the observed relationships between short- and long-run student outcomes to determine the weights placed on short-run measures when estimating teacher VAMs. Using a hold-out sample of students, we first apply machine learning techniques to estimate the observed relationship between short- and long-run student outcomes. Using a completely distinct set of students from the hold-out sample, we then estimate teacher value-added to these predicted long-run outcomes; that is, we estimate value-added models with predicted long-run outcomes on the left-hand side. We refer to these novel VAMs as combined value-added models (CVAMs) because they combine both test and nontest measures.
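The two-step procedure above can be sketched in simulated data. The code below is a minimal illustration and not the paper's implementation: the data-generating process, the use of a linear probability model as a stand-in for the machine-learning step, and a simple teacher-mean VAM with no student covariates are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: each student has two short-run outcomes
# (a test score and attendance) and one long-run outcome (college enrollment).
n_students, n_teachers = 2000, 50
teacher = rng.integers(0, n_teachers, n_students)
teacher_effect = rng.normal(0, 0.3, n_teachers)  # true effects on short-run skills
test_score = teacher_effect[teacher] + rng.normal(0, 1, n_students)
attendance = 0.5 * teacher_effect[teacher] + rng.normal(0, 1, n_students)
college = (0.4 * test_score + 0.6 * attendance + rng.normal(0, 1, n_students)) > 0

# Step 1: on a hold-out sample, learn the mapping from short-run outcomes to
# the long-run outcome. A linear probability model fit by least squares stands
# in here for the machine-learning step.
holdout = rng.random(n_students) < 0.5
X = np.column_stack([np.ones(n_students), test_score, attendance])
beta, *_ = np.linalg.lstsq(X[holdout], college[holdout].astype(float), rcond=None)

# Step 2: on the remaining (distinct) students, form predicted long-run
# outcomes and estimate teacher value-added to the prediction. The teacher
# mean of the prediction is a simplified VAM with no covariate adjustment.
est = ~holdout
predicted = X[est] @ beta
cvam = np.array([predicted[teacher[est] == t].mean() for t in range(n_teachers)])

# Sanity check: estimated CVAMs should track the true teacher effects.
corr = np.corrcoef(cvam, teacher_effect)[0, 1]
print(round(corr, 2))
```

In this sketch the hold-out split is what keeps the prediction step and the value-added step on distinct students, mirroring the sample separation described above.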
We then demonstrate that for key outcomes such as high school graduation and college attendance, teacher effects on these actual long-run outcomes are similar to teacher effects on predicted long-run outcomes based on contributions to short-run student outcomes. This means that credible measures of value-added to high school graduation and college enrollment can be estimated without waiting for these long-run variables to become available. Finally, we compare the predictive power of CVAMs for long-run outcomes to that of traditional test-based value-added measures. CVAMs are substantially more predictive of long-run outcomes than traditional test-based value-added: a one-standard-deviation increase in CVAM leads to an increase in high school graduation about 9 times larger, and an increase in college enrollment about 3 times larger, than a one-standard-deviation increase in test-based value-added.