
Facial recognition software is less accurate when it comes to African-Americans


Facial recognition software, an increasingly popular high-tech crime-fighting tool used by police departments around the U.S., consistently misidentifies African-American, Native American and Asian faces, according to a new federal study reported by The Crime Report.

The study by the National Institute of Standards and Technology (NIST), an agency of the Department of Commerce, avoided any recommendation to abandon the software, cautiously noting that some developers' algorithms were markedly more accurate than others. But it issued a stern warning to customers of the software – most of whom are in law enforcement – to be "aware of these differences and use them to make decisions and to improve future performance." "Different algorithms perform differently," emphasized a summary accompanying the report.

Researchers evaluated 189 face recognition algorithms supplied by 99 developers, which the study said represented a "majority of the industry," and applied them to 18 million images of more than eight million people, using databases provided by the State Department, the Department of Homeland Security (DHS) and the FBI. They found a startling number of "false positives" – cases where the software incorrectly matched two different people's faces – for Asian and African-American faces compared with White faces. The error factor varied enormously across algorithms, ranging from 10 to 100 times higher.

“Using the higher quality application photos, false positive rates are highest in West and East African and East Asian people, and lowest in Eastern European individuals,” the study said, noting that there were fewer false positives for Asian faces in software developed by China. “We found false positives to be higher in women than men, and this is consistent across algorithms and data sets. This effect is smaller than that due to race. We found elevated false positives in the elderly and in children; the effects were larger in the oldest and youngest, and smallest in middle-aged adults.”

In an equally significant finding, when a single image was matched against a number of faces – a technique used by police and customs officials to check whether an individual appears in a database of known criminals or terrorists – there were higher rates of false positives for African-American females. Facial recognition is also now widely used in surveillance systems deployed in public areas, aimed at detecting individuals linked in FBI or DHS databases to terror groups, or at spotting wanted criminals or missing persons in a crowd. Researchers said the wide variation in errors confirmed the fears of critics that the technology was riddled with "algorithmic bias."

The study was a "sobering reminder that facial recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties," MIT Media Lab researcher Joy Buolamwini told the Washington Post. Questions about the technology have persuaded some cities against purchasing it for their police departments. The western Massachusetts city of Springfield decided against it after weighing the technology's potential to deliver racially flawed results, effectively anticipating the NIST findings.

“I’m a Black woman and I’m dark,” Springfield councilor Tracye Whitfield told Police Commissioner Cheryl Clapprood, who is White. “I cannot approve something that’s going to target me more than it will target you.”
