Uncertainty is a standard feature of quantitative sociology, as evidenced by the use of confidence intervals and significance tests designed to account for the uncertainty that arises from random sampling. There is, however, growing interest in extending traditional approaches to capture the uncertainty associated with the choice of model specification. While many choices go into defining a model, the choice of controls is among the most consequential. This raises the question of whether a given result is robust to changes in the adjustment set. A popular way to address this problem is to run a separate model for each combination of doubtful controls and then summarize the resulting distribution of estimates. As I show, this approach is not only computationally inefficient but also anti-conservative. With these problems in mind, I develop a simple parametric test for the average feasible effect. Simulation shows that the proposed test works as expected, providing results that are not otherwise accessible using existing methods such as computational robustness analysis and specification curve analysis.
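For readers unfamiliar with the combinatorial approach the abstract critiques, the following is a minimal sketch in Python of fitting one regression per subset of doubtful controls and summarizing the distribution of treatment-effect estimates. The variable names (y, d, x1-x3) and the simulated data are illustrative assumptions, not the author's actual setup or proposed test.

    # Sketch of combinatorial robustness analysis: one OLS fit per
    # subset of doubtful controls, then a summary of the estimates.
    from itertools import chain, combinations

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "x1": rng.normal(size=n),
        "x2": rng.normal(size=n),
        "x3": rng.normal(size=n),
    })
    df["d"] = 0.5 * df["x1"] + rng.normal(size=n)                  # treatment
    df["y"] = 1.0 * df["d"] + 0.7 * df["x1"] + rng.normal(size=n)  # outcome

    doubtful = ["x1", "x2", "x3"]

    def all_subsets(items):
        # Every subset of the doubtful controls, including the empty set.
        return chain.from_iterable(
            combinations(items, k) for k in range(len(items) + 1)
        )

    estimates = []
    for subset in all_subsets(doubtful):
        rhs = " + ".join(("d",) + subset)
        fit = smf.ols(f"y ~ {rhs}", data=df).fit()
        estimates.append(fit.params["d"])

    estimates = np.array(estimates)
    print(f"{len(estimates)} specifications; "
          f"mean estimate = {estimates.mean():.3f}, "
          f"range = [{estimates.min():.3f}, {estimates.max():.3f}]")

Note that the number of fitted models grows as 2^k in the number k of doubtful controls, which illustrates the computational inefficiency the abstract points to; the anti-conservatism claim and the parametric test for the average feasible effect are the paper's contributions and are not reproduced here.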