On an ordinary walk through the city, a New Yorker may pass dozens of cameras without ever noticing them, mounted on traffic lights, storefronts, and apartment buildings. Fewer still may know that more than 25,000 such cameras are spread throughout the city, forming a sweeping surveillance network that leaves no neighborhood untouched and allows the NYPD to track virtually anyone moving through the city with facial recognition software and other surveillance tools.

Although these technologies are often justified as tools for public safety, they are frequently abused by law enforcement and disproportionately target Black and Brown communities, where cameras are concentrated most densely. Human rights organizations like Amnesty International and Human Rights Watch have widely documented how AI and surveillance policing technologies violate the health and human rights of communities in New York City and around the world, but there is still a long road ahead in raising public awareness of these issues and holding law enforcement accountable for its abuse of data and privacy.

The NYPD’s lack of transparency and accountability in its handling of sensitive data collected through facial recognition software has grown even more alarming in recent years, as surveillance technologies have been used across the country to illegally target and deport undocumented immigrants. These practices pose direct threats to the dignity, privacy, and safety of everyone who calls this city home, and they carry particular consequences for migrants and their families, further undermining New York’s status as a “sanctuary city.”

Department of Homeland Security (DHS) fusion centers have routinely enabled Immigration and Customs Enforcement (ICE) to co-opt local police databases and surveillance tools that it would otherwise not be legally allowed to use for deportation purposes, violating state and local protections for undocumented immigrants. As a result, information collected during routine encounters with local police can later be used to identify, detain, or deport migrants, with no accountability for the agencies involved.

These practices constitute clear human rights violations. The Universal Declaration of Human Rights (UDHR) provides a valuable framework for evaluating the ethical implications of over-surveillance and predictive policing. Article 7’s guarantee of equal protection under the law is violated when biased data and algorithms reinforce discrimination. Article 9’s protection against arbitrary arrest is undermined when algorithmic predictions or facial recognition matches are treated as sufficient evidence. Most directly, Article 12’s protection against arbitrary interference with privacy and family life is violated through the collection, retention, and sharing of biometric and personal data.

All of these violations of fundamental human rights are also inseparably linked to physical, psychological, and social harms to health. Living in neighborhoods with dense surveillance is associated with higher levels of psychological distress. For migrants, the knowledge that surveillance data may be shared with ICE can also discourage them from seeking medical care, accessing social services, or participating in community life, directly undermining the accessibility and availability of health care. These psychological harms compound physical health risks, including poorer sleep, weakened immunity, and an increased risk of stress-related conditions such as diabetes and heart disease.

The lack of transparency in the development and deployment of these policing technologies complicates efforts to address these rights violations. The algorithms used by law enforcement agencies are often proprietary, making it difficult for independent researchers and civil rights organizations to scrutinize their decision-making processes, which hinders accountability and prevents meaningful public oversight. Still, human rights and civil rights organizations have already won significant victories in their campaigns against oversurveillance and police violations of privacy, including the ACLU’s lawsuit and campaign to bring justice to victims wrongfully arrested on the basis of surveillance data (Williams, 2021), and the Brennan Center’s various lawsuits against the NYPD over its use of social media monitoring and data mining. By continuing to mobilize around the stories of individuals treated unjustly by these technologies, and by pressuring local leaders to limit intelligence- and data-sharing partnerships with federal law enforcement, organizations and individuals alike can help advance these efforts.

Attempting to make predictions about criminality or legal status “without bias” will always be futile, and as long as the fundamental, systemic problems in our policing system remain unaddressed, these technologies and their predictions will only compound the injustices that system already perpetrates. Privacy and dignity are human rights, and they are prerequisites for health. By learning how our data is used and advocating for transparency and accountability in the handling of our private information, we as members of our communities can protect the most vulnerable among us and move one step closer to justice.

Raga Mandali is a writer and a Master’s in Bioethics student at NYU, studying the intersection of healthcare, philosophy, and politics.
