Individual Submission Summary

Limited Vision: The Undersampled Majority

Fri, May 25, 11:00 to 12:15, Hilton Prague, Floor: M, Karlin III

Abstract

While prior work has studied the disparate impact risks of big data (Barocas and Selbst, 2016), few have characterized demographic bias in the data that is used to train and benchmark data-centric technology like facial recognition software (Han and Jain, 2014). Left unaddressed, bias in training data can result in algorithms that perform poorly on underrepresented groups, and skewed benchmark data can mask performance differences between genders, ethnicities, and other demographic categories. In the case of computer vision powered by artificial intelligence, skewed benchmarks and aggregate metrics can mask performance disparities between individuals with different phenotypic features like skin type and facial geometry. This work focuses on facial analysis in computer vision to demonstrate the more general need for inclusive benchmark data and disaggregated accuracy metrics across a range of human-focused automated tasks. Inclusive and ethical artificial intelligence will necessitate intersectional data to mitigate algorithmic bias.
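As a toy illustration of the abstract's point that aggregate metrics can mask group-level disparities, the sketch below computes both an overall accuracy and per-group accuracies. The group labels and numbers are hypothetical, not data from this talk:

```python
# Toy illustration: aggregate accuracy can hide per-group disparities.
# Groups "A" and "B" and all counts below are hypothetical.
from collections import defaultdict

def disaggregated_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, pred, actual in records:
        totals[group] += 1
        hits[group] += int(pred == actual)
    overall = sum(hits.values()) / sum(totals.values())
    per_group = {g: hits[g] / totals[g] for g in totals}
    return overall, per_group

# 90 majority-group samples, mostly classified correctly;
# 10 minority-group samples, mostly misclassified.
data = ([("A", 1, 1)] * 85 + [("A", 0, 1)] * 5
        + [("B", 1, 1)] * 3 + [("B", 0, 1)] * 7)
overall, per_group = disaggregated_accuracy(data)
# overall accuracy is 0.88, yet group B's accuracy is only 0.30
```

An 88% headline accuracy looks strong, but disaggregating reveals a 30% accuracy for the underrepresented group, which is exactly the kind of disparity a single aggregate benchmark score conceals.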

Author