A recent study by the National Institute of Standards and Technology (NIST) revealed that facial recognition software delivers flawed results when assessing minority populations. The study has sparked yet another debate about the controversial technology, given its frequent use in the apprehension of suspects.
The NIST study tested nearly 200 facial recognition algorithms. The results illustrated a higher rate of misidentification and other errors among subjects who identified as people of color.
False positives refer to instances in which the software identifies an incorrect match; false negatives refer to situations in which the software fails to recognize a true match. The report also showed a higher incidence of false positives among African American women.
The study has given advocacy groups and politicians more ammunition to seek a ban on the use of the technology.
“Not only could it be used to enable invasive, unnecessary and harmful government surveillance, its inaccuracies disproportionately affect vulnerable communities,” Rep. Rashida Tlaib said via Twitter last week.
Proud to join the call for @HUDgov to review facial recognition technology use in federally assisted housing.
— Congresswoman Rashida Tlaib (@RepRashida) December 20, 2019
This is not the first time that facial recognition technology has come under scrutiny. For the misidentified, the implications are serious. Inaccuracies could open the door to false arrests, unwarranted surveillance, impostors, and other civil rights violations. They could also lead to unchecked discrimination in employment, housing, and other arenas.