The Legal Aid Society has asked the city Department of Investigation to look into the NYPD’s use of facial recognition technology and apparent violations of its own internal policies.

In a letter sent Monday, lawyers with Legal Aid cited reporting by THE CITY on an incident in which the NYPD circumvented its own restrictions on facial recognition searches in order to track down a pro-Palestinian protester at Columbia University.   

In that incident, NYPD detectives relied on a Fire Department marshal’s access to Clearview AI to identify Zuhdi Ahmed, who was accused of hurling an object at a student and later charged with a hate crime.

Many law enforcement agencies use Clearview AI software, which matches photos uploaded to its system with billions of images in a database sourced from social media and other websites. The NYPD, however, is not allowed to: police are limited to image searches in a repository containing arrest and parole photos.

A city law called the POST Act requires the NYPD to report publicly on its use of and policies regarding surveillance technologies, but the DOI has found the NYPD has not consistently complied. City Council members are drafting legislation to tighten up the POST Act.

Legal Aid’s letter demanding a probe by DOI’s NYPD Inspector General also cited another case in which the NYPD wrongfully arrested a man — who then spent two days in jail — after relying on facial recognition technology. As the New York Times reported, the man arrested was significantly taller than the person who had been accused of exposing himself to a woman in Manhattan, but his image was included in an array of photos presented to the victim. She picked his photo from the array.

Though the case against him was dismissed, the NYPD’s use of facial recognition technology raises alarms, said Diane Akerman, staff attorney with Legal Aid’s Digital Forensics Unit.

“It has become so clear that the NYPD cannot be trusted with facial recognition technology. They cannot even do the bare minimum in making sure it will not lead to false arrests,” she said. “They are actively subverting their own rules, their own minimal guardrails, without any care for the consequences.”

The DOI did not immediately respond to a request for comment. 

An NYPD spokesperson called facial recognition technology an “important tool” but said officers “cannot and will never make an arrest solely using” it.

To conduct searches outside the approved photo repository of parole and arrest photos, officers must get permission from top NYPD officials. Employees who misuse facial recognition technology may face administrative or criminal penalties, according to department policy.

But in one case, emails Legal Aid submitted in court showed, an FDNY marshal accessed Clearview AI at a detective’s request to help the NYPD identify a pro-Palestinian protester at Columbia University who allegedly threw an object at a counter-protesting student. The FDNY ran a photo the NYPD had posted to Instagram through Clearview AI, which turned up photos of the protester from high school. The NYPD used that information to figure out who he was.

FDNY has been using Clearview AI since December 2022, and paid for its access through Department of Homeland Security grants, records show.

The Manhattan DA charged Ahmed with a felony, assault in the third degree as a hate crime, later reduced to a misdemeanor of second-degree aggravated harassment. A criminal court judge in June dismissed the case against him and, in a lengthy ruling, raised red flags about government surveillance and practices that ran afoul of law enforcement’s own policies.

The judge wrote in her ruling that it was “evident” NYPD’s investigatory steps “clearly contravene official NYPD policy concerning the use of facial recognition.”
