Advancing Racial Equality & Social Justice

High tech can heighten discrimination; here are some policy recommendations for its ethical use


Image from Shutterstock.com.

From federal surveillance of social justice protests to facial recognition technology that results in inordinately high false positives for certain demographic groups, recent surveillance trends have deep historical roots and troubling future implications for traditionally marginalized groups. These trends threaten our core constitutional values, democratic principles and the rule of law.

Indeed, technology deployed in policing, public housing, education and other areas of public life has often exacerbated discrimination against communities of color, low-income communities and politically active communities. Representative technologies include facial recognition, predictive policing and bail predictions.

For instance, facial recognition allows police officers to match a smartphone photo against mug shots stored in an official database. According to research from MIT and Stanford, however, this technology is more likely to misidentify people of color, particularly dark-skinned women.
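
At bottom, this kind of one-to-many identification is a numerical comparison. The simplified sketch below, which uses made-up data and stands in for no particular vendor's product, shows the basic mechanics: a probe photo is converted into a numeric embedding, compared against a gallery of mug-shot embeddings, and any record whose similarity clears a fixed threshold is reported as a match. If the underlying model produces less reliable embeddings for some demographic groups, those groups will see more false matches at the very same threshold.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return every gallery record whose similarity to the probe clears
    the threshold; these are reported as candidate matches. A threshold
    tuned on one demographic group can produce far more false positives
    for groups the embedding model represents poorly."""
    hits = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: h[1], reverse=True)


# Toy demo: random vectors stand in for a real face-embedding model.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(999)}
gallery["record_on_file"] = probe + rng.normal(scale=0.1, size=128)  # near-duplicate
print(identify(probe, gallery))  # [('record_on_file', ~0.99)]
```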

At airports, where facial recognition has become pervasive, flawed algorithms mean that some groups are subject to more security screenings than others. Even landlords have begun installing the smart tech in housing units, raising privacy concerns about how biometric data is collected, stored and used without tenants’ permission.


But facial recognition is not the only smart tech amplifying bias against impacted communities. Predictive policing refers to computer software employed to predict trends about crime, including a person’s likelihood to engage in criminality based upon his or her characteristics, identity and/or location. Naturally, this can contribute to unwarranted suspicion, culminating in police harassment and/or surveillance.

These effects may fall hardest on certain demographics, including Black, Indigenous and people of color, some of whom may be densely concentrated in urban neighborhoods where crimes occur with higher frequency.

For instance, in Chicago, nearly 400,000 people landed on a “Strategic Subject List” in the city’s bid to prevent gun violence. An algorithm, attempting to predict who is most likely to engage in such crimes, or be victimized by them, produced the list. Of those, police identified 1,500 as being responsible for the majority of the violence.

According to a 2017 Chicago Sun-Times investigation, 85 percent of those who ranked with the highest risk score as potential offenders were African Americans. In response to large-scale roundups, advocates accused the city of penalizing citizens for offenses they have not actually committed. Ultimately, the Chicago Police Department discontinued the program on Nov. 1, 2019.

Similarly, algorithms employed in bail predictions are likely to reproduce inequalities. In New Jersey, where judges are required to consider the results from an algorithm that predicts one's likelihood to skip court or commit another crime, advocacy groups have called for a ban because the tech replicates rather than reduces racial disparities.
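
To illustrate how such tools can encode disparity, the sketch below shows a deliberately simplified, hypothetical points-based risk score; it is modeled on no actual assessment used in Chicago, New Jersey or anywhere else. Each factor adds weighted points, and the total is bucketed into the tier a judge or officer ultimately sees. Because inputs such as prior arrests reflect where enforcement has historically been concentrated, the score can reproduce those patterns even though race is never an explicit input.

```python
# Hypothetical points-based risk score -- an illustration only, not the
# actual instrument used in New Jersey or Chicago's Strategic Subject List.
WEIGHTS = {
    "prior_arrest": 2,            # arrest records mirror past enforcement patterns
    "prior_failure_to_appear": 3,
    "under_25": 2,
    "current_violent_charge": 4,
}


def risk_score(record: dict) -> int:
    """Add the weight of every factor flagged in the person's record."""
    return sum(weight for factor, weight in WEIGHTS.items() if record.get(factor))


def risk_tier(score: int) -> str:
    """Bucket the raw score into the label a judge or officer would see."""
    if score >= 7:
        return "high"
    if score >= 4:
        return "moderate"
    return "low"


# Two people with identical current conduct; the one from a more heavily
# policed neighborhood carries a prior arrest and lands in a higher tier.
person_a = {"prior_arrest": True, "under_25": True}
person_b = {"under_25": True}
print(risk_tier(risk_score(person_a)), risk_tier(risk_score(person_b)))  # moderate low
```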

Against this background, the ABA Section of Civil Rights and Social Justice Rights of Immigrants Committee, in strategic partnership with the German Marshall Fund of the United States, hosted a multi-part policy summit. The virtual convening featured a panel of interdisciplinary experts who developed innovative, practical, community-driven policy solutions to these pressing problems.

Here are some of its recommendations.

Issue a moratorium on new technologies

The moratorium should apply to technologies that can be used for racial and religious profiling, mass surveillance and policing, and it should remain in place until there is a national law ensuring their ethical use.

Before local, state and federal governments adopt a new technology, companies must safeguard against products that produce biased results or undermine privacy interests. To this end, legislative action is required to regulate all technologies that can expand government surveillance powers and/or amplify racial, religious, ethnic, gendered and other disparities in the policing of historically marginalized communities. Stronger regulations will ensure more ethical use.

Members of Congress, for example, previously introduced several bills to address facial recognition technology, including the Facial Recognition and Biometric Technology Moratorium Act of 2020. The bill prohibits biometric surveillance—computer software that involves facial recognition—by the federal government without prior explicit congressional authorization. It also withholds certain federal public safety grants from state and local governments that engage in such surveillance.

Notably, in the aftermath of last year’s racial justice protests against police brutality, some tech giants instituted their own moratoriums. For instance, Amazon announced a one-year moratorium on selling its facial recognition technology to police departments until Congress can enact appropriate regulations.

In addition, Microsoft refused to sell its facial recognition technology to police forces until there is a national law ensuring ethical use. And, IBM condemned all technology that can be used for racial profiling and mass surveillance while withdrawing from the facial recognition market altogether.

Engage a race equity analysis

Lack of awareness often underlies the discriminatory design of technologies that exacerbate criminal justice disparities. To avoid such impacts on Black, Indigenous and people of color, tech companies should engage in a racial equity analysis—rather than focusing exclusively on profit—to better understand how new and existing products may contribute to racial, ethnic, religious, gendered and other forms of discrimination.

This is particularly so in light of long-standing patterns surrounding bias in policing in both criminal and national security contexts. Significantly, some tech companies have prioritized this equity analysis. For instance, Parity, an AI startup, has designed a platform to address bias in algorithmic systems. In doing so, it relies on data from impacted communities.
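
One common form such an analysis takes is a disparity audit of a model's error rates. The generic sketch below is an illustration of that idea rather than a description of Parity's actual platform: it compares false positive rates across demographic groups, the same kind of gap documented in the facial recognition research discussed earlier.

```python
from collections import defaultdict


def false_positive_rates(records):
    """Compute the false positive rate for each demographic group.

    Each record is a tuple of (group, model_flagged, ground_truth).
    A large gap between groups is one common signal of disparate impact.
    """
    false_pos = defaultdict(int)  # flagged by the model but actually negative
    negatives = defaultdict(int)  # all true negatives per group
    for group, flagged, truth in records:
        if not truth:
            negatives[group] += 1
            if flagged:
                false_pos[group] += 1
    return {g: false_pos[g] / n for g, n in negatives.items() if n}


# Toy audit data: (group, did the model flag this person?, were they actually a match?)
audit = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
print(false_positive_rates(audit))  # group_b is wrongly flagged twice as often
```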

Engy Abdelkader.

Enact transparency and oversight laws

In light of the exacerbated impact of surveillance and other technologies on communities of color, low-income communities and politically active communities, federal, state and local laws should ensure transparency and oversight of the funding, acquisition and use of tech in policing, public housing, schools and other aspects of public life.

These measures should also empower communities in those processes. For instance, some cities and states have adopted Community Control Over Police Surveillance laws that require transparency, provide strong oversight mechanisms, and allow for community feedback on surveillance and other police technologies prior to purchase and deployment.

Representative ordinances prohibit the local funding, acquisition or use of surveillance technologies without the explicit approval of elected city representatives. They may also require explicit community approval before police use existing technologies in new ways. These ordinances can also mandate public hearings to allow for open debate and an informed citizenry prior to official deployment. Such laws have been adopted in New York City, San Francisco, Seattle and 11 other cities around the country.

Ask new and different questions

The questions asked often determine the answers given. For instance, if we ask technology to help international borders remain open to immigrants escaping persecution abroad, the solution may look quite different from one focused exclusively on border security interests. The questions we ask technology to solve should advance a more humane, safe and equal world. To realize new futures, we should contemplate new questions.


Engy Abdelkader is chair of the ABA Section of Civil Rights and Social Justice Rights of Immigrants Committee and a fellow with the German Marshall Fund of the United States. She teaches at Rutgers University.
