This week I am turning my column over to Ojani-Pierre Ruphin Walthrust, an MPA student at Columbia SIPA concentrating in Urban Policy. He is a 2020 Pickering Foreign Affairs Fellow and will enter the Foreign Service as a U.S. diplomat in Fall 2022 after graduating. *The opinions expressed are Ojani-Pierre Ruphin Walthrust’s and not those of the Department of State.*
Mayor Eric Adams recently embraced police use of Artificial Intelligence (AI) facial recognition technologies in New York City, advocating their use for “investigative purposes” to crack down on a 38.5% rise in major crime. The software, already common in criminal justice, flags probable matches, but it is not fully accurate. Adams expressed his desire to use “facial recognition” and “new tools that can spot those carrying weapons to identify problems.” However, his plan fails to consider these technologies’ disproportionate impact on Black Americans. He must address the systemic bias within AI facial recognition and lay out concrete, proactive safeguards before deciding to use it.
Surveillance of Black people in the U.S. dates back to the 18th-century lantern laws, which required enslaved Black, mixed-race and Indigenous people to carry candle lanterns when walking the city after sunset unless accompanied by a white person. Those who did not comply faced punishment. In the 20th century, the FBI tracked the political activity of Black college students across the country. During the War on Drugs, President Reagan’s administration used wiretaps and sneak-and-peek warrants to prosecute people for drug offenses, knowing this would disproportionately target Black and Brown people in the U.S.
Black Americans have been surveilled by the government for too long, and because these technologies are largely unregulated, they will further increase the disproportionate targeting of people of color, specifically Black Americans, and violate personal privacy. Proponents argue that the technologies save law enforcement time, improve public safety and help detect banking fraud. However, the underlying algorithms can be racially biased because they are often trained on data that lacks diversity. For example, Black scholars Joy Buolamwini and Timnit Gebru found that facial analysis algorithms misclassified Black women about 35% of the time while almost never misclassifying white men. If the NYPD uses facial recognition technology as is, it could lead to more wrongful arrests, lengthy detentions and potentially more deadly police violence.
If Mayor Adams wishes to use these technologies, he must first acknowledge the harm that AI facial recognition has already inflicted on Black Americans. Then, he must form a task force with representatives from the NYPD, the community and tech companies to monitor the tools’ accuracy and progress. Lastly, he should work to fix the problem systemically by partnering with the state and federal governments and expanding scholarships and grants for programs that attract more data programmers of color and more women.