In June 2020, a facial recognition algorithm led to the wrongful arrest of Robert Williams, an African-American, for a crime he did not commit. After a shoplifting incident in a pricey area of Detroit, Michigan, his driver's license photo was wrongly matched with a blurry video of the perpetrator. Police released him after several hours and apologised, but the episode raises serious questions about the accuracy of facial recognition algorithms.

The troubling aspect of the story is that facial recognition algorithms have been shown to be less accurate for Black faces than for white ones.

## To err is human… and algorithmic

But why do facial recognition algorithms make more mistakes for Black faces than white ones, and what can be done about it? Like any prediction algorithm, facial recognition algorithms make probabilistic predictions based on incomplete data – a blurry photo, for example. Such predictions are never completely error-free, nor can they be. Since errors always exist, the questions are what level of error is acceptable, which kinds of errors should be prioritised, and whether a strictly identical error rate is needed for every population group.

Facial recognition algorithms produce two kinds of errors: false positives and false negatives. The first occurs when the algorithm finds a positive match between two facial images that in fact show different people (this was the case for Robert Williams). The second takes place when the algorithm says there is no match, but in fact there should be one.

The consequences of these two errors differ depending on the situation. If the police use a facial recognition algorithm in their efforts to locate a fugitive, a false positive can lead to the wrongful arrest of an innocent person. Alternatively, when border-control authorities use facial recognition to determine whether a person matches the passport he or she carries, a false positive will let an impostor cross the border with a stolen passport.

Each case requires a determination of the cost of different kinds of errors, and a decision on which kind of errors to prioritise. If police are tracking potentially violent suspects, for example, they may want to reduce the number of false negatives so that suspects are less likely to slip through, but this would drive up the number of false positives – in other words, people falsely arrested.

Racism can arise when there is a higher rate of error, either false negative or false positive, for a subset of the population – for example, Black people in the United States. These differential error rates are not programmed into the algorithm – if they were, it would be manifestly illegal. Instead, they slip in during the design and “training” process.

Most developers send their algorithms to the US National Institute of Standards and Technology (NIST) to be tested for differential error rates across different parts of the population. NIST uses a large US government database of passport and visa photos, and tests the algorithms on faces of different nationalities.
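To make the threshold trade-off and the idea of differential error rates concrete, here is a minimal sketch in Python. The similarity scores, group labels and threshold values are invented for illustration; they do not come from any real matcher or from NIST's tests.

```python
# Minimal sketch: how the match threshold trades false negatives against
# false positives, and how error rates can differ between population groups.
# All data below is made up for illustration.

def error_rates(scores, labels, threshold):
    """Return (false_positive_rate, false_negative_rate) for one group.

    scores: similarity scores from a face matcher (higher = more alike)
    labels: ground truth, True if the pair of photos shows the same person
    threshold: scores at or above this value are declared a "match"
    """
    fp = sum(1 for s, same in zip(scores, labels) if s >= threshold and not same)
    fn = sum(1 for s, same in zip(scores, labels) if s < threshold and same)
    negatives = sum(1 for same in labels if not same)  # different-person pairs
    positives = sum(1 for same in labels if same)      # same-person pairs
    return fp / negatives, fn / positives

# Hypothetical scores: group A's scores are well separated, group B's are not
# (the analogue of an algorithm that is less accurate for one group).
group_a = ([0.91, 0.83, 0.40, 0.35, 0.78, 0.30],
           [True, True, False, False, True, False])
group_b = ([0.88, 0.62, 0.55, 0.33, 0.58, 0.52],
           [True, True, False, False, True, False])

for threshold in (0.5, 0.7):
    fpr_a, fnr_a = error_rates(*group_a, threshold)
    fpr_b, fnr_b = error_rates(*group_b, threshold)
    print(f"threshold={threshold}: "
          f"group A FPR={fpr_a:.2f} FNR={fnr_a:.2f} | "
          f"group B FPR={fpr_b:.2f} FNR={fnr_b:.2f}")
```

With these made-up numbers, group A is matched perfectly at both thresholds, while group B has a 0.67 false-positive rate at the lower threshold and a 0.67 false-negative rate at the higher one: lowering the threshold to catch more true matches produces more false arrests, and the penalty falls unevenly on the group whose scores the algorithm separates less well.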