While prior work has studied the disparate impact risks of big data (Barocas and Selbst, 2016), few have characterized demographic bias in the data used to train and benchmark data-centric technology like facial recognition software (Han and Jain, 2014). Unaddressed, bias in training data can result in algorithms that perform poorly on underrepresented groups. Unexamined, skewed benchmark data can mask performance differences between genders, ethnicities, and other demographic categories. In the case of computer vision powered by artificial intelligence, skewed benchmarks and aggregate metrics can mask performance disparities between individuals with different phenotypic features like skin type and facial geometry. This work focuses exclusively on facial analysis in computer vision to demonstrate the more general need for inclusive benchmark data and disaggregated accuracy metrics across a range of human-focused automated tasks. Inclusive and ethical artificial intelligence will necessitate intersectional data to mitigate algorithmic bias.
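The masking effect described above can be illustrated with a minimal sketch. The data, subgroup labels, and numbers below are entirely hypothetical and are not drawn from this work; they only show how a single aggregate accuracy can hide a large gap that disaggregation reveals.

```python
# Hypothetical sketch: aggregate accuracy can mask subgroup disparities.
from collections import defaultdict

def accuracy(records):
    """Fraction of records where the predicted label matches the true label."""
    return sum(pred == true for pred, true, _ in records) / len(records)

# Invented example records: (predicted_label, true_label, subgroup)
results = [
    ("F", "F", "lighter"), ("F", "F", "lighter"), ("M", "M", "lighter"),
    ("M", "M", "lighter"), ("F", "F", "lighter"), ("M", "M", "lighter"),
    ("F", "M", "darker"), ("M", "F", "darker"),
    ("F", "F", "darker"), ("M", "M", "darker"),
]

# The aggregate metric looks acceptable on its own: 8 of 10 correct.
overall = accuracy(results)  # 0.8

# Disaggregating by subgroup exposes the gap the aggregate hides.
by_group = defaultdict(list)
for record in results:
    by_group[record[2]].append(record)
disaggregated = {g: accuracy(recs) for g, recs in by_group.items()}
# "lighter": 1.0 (6/6), "darker": 0.5 (2/4)
```

The same grouping logic extends to intersectional categories (e.g., gender crossed with skin type) by keying the groups on a tuple of attributes instead of a single one.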