Bias in AI

Facial recognition is at the forefront of Big Data and AI. It is a subfield of artificial intelligence in which a software algorithm identifies an individual by processing a digital image of that person's face: the algorithm compares facial features extracted from the image against faces stored in a database. Many industries and organizations use it: law enforcement and security agencies use it to try to track down suspects, social media platforms use it to tag people, and dating sites match people with similar facial features.
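The compare-against-a-database step described above can be sketched in a simplified form. Real systems use deep neural networks to turn a face image into a numeric feature vector (an "embedding"); the sketch below assumes such embeddings already exist and shows only the matching stage, using cosine similarity and a hypothetical acceptance threshold. All names and values here are illustrative assumptions, not any vendor's actual implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two feature vectors: 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, database, threshold=0.8):
    """Compare a probe face embedding against every enrolled face.

    Returns the best-matching identity, or None if no match clears
    the (hypothetical) similarity threshold.
    """
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Illustrative 4-dimensional embeddings (real systems use hundreds of dimensions).
db = {
    "alice": np.array([1.0, 0.0, 0.0, 0.0]),
    "bob":   np.array([0.0, 1.0, 0.0, 0.0]),
}
```

The threshold is where accuracy trade-offs enter: set it too low and the system produces false matches; set it too high and it fails to recognize enrolled faces. The error rates discussed below arise from exactly this kind of decision.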

Tests by the National Institute of Standards and Technology revealed a large amount of error, with algorithms being accurate only 36% to 87% of the time.

A growing body of research has gone beyond showing poor overall accuracy and has demonstrated that facial recognition error rates differ across demographic groups. Research by Joy Buolamwini found that the maximum error rate for lighter-skinned males was 0.8%, while error rates for darker-skinned females were up to 34.5%. The research demonstrates that AI systems are not inherently gender or race neutral.
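To make the size of that gap concrete, the disparity can be computed directly from per-group error rates. The counts below are hypothetical, chosen only to reproduce the 0.8% and 34.5% figures cited above; they are not the actual sample sizes from the study.

```python
# Hypothetical evaluation counts (misclassified, total) per demographic group,
# chosen to mirror the error rates reported in the cited research.
groups = {
    "lighter-skinned males":  (8, 1000),    # 0.8% error
    "darker-skinned females": (345, 1000),  # 34.5% error
}

rates = {name: wrong / total for name, (wrong, total) in groups.items()}

# Ratio of worst-case to best-case error rate across groups.
disparity = max(rates.values()) / min(rates.values())
```

At these rates the worst-served group experiences an error rate more than 40 times higher than the best-served group, which is why per-group evaluation, rather than a single overall accuracy number, is essential.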

Buolamwini, J., and Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81:1–15, Conference on Fairness, Accountability, and Transparency.
