Human decision making is flawed. Evidence suggests that judges impose harsher sentences when sleep-deprived, avoid changing the status quo on low blood sugar, and unduly punish defendants after their football team suffers a loss. Consequently, the promise of dispassionate, untiring machines making data-driven decisions seems a welcome innovation. However, such systems are themselves subject to bias. Created by people and trained on historical data, poorly crafted algorithms can bake in existing biases, obscuring them behind a gloss of mathematics. It is therefore necessary to consider carefully how such models are constructed and used. An error-prone risk model may be welcome when used to help triage scarce social aid but morally unconscionable when used to determine sentencing. This presentation will explore how one should best approach the construction and use of such models, with a focus on understanding what a model is optimizing (its definition of success) and how this should constrain its use.