As facial recognition software becomes more ubiquitous, some governments slam on the brakes
There’s a public discourse playbook when it comes to law enforcement adopting a new technology.
On one side are those defending the technology as a means to improve policework and public safety. On the other side are those who worry the technology is too new to be trusted, erodes civil liberties and exacerbates inequalities.
In this scenario, a community weighs the costs and benefits of the adoption of such a technology. After the debate settles, the technology is almost always adopted with varying promises of oversight and transparency.
Facial recognition, however, is tearing up that playbook.
“It’s the first time I’ve seen communities put a moratorium on the use of a surveillance technology,” says Catherine Crump, director of the Samuelson Law, Technology and Public Policy Clinic at UC Berkeley School of Law. She says the bans reflect local communities’ feelings toward their police departments, civil liberties and public safety.
This year, Oakland and San Francisco, California, and Somerville, Massachusetts, all banned the government’s use of facial recognition technology as part of larger legislative packages overseeing police surveillance technology. Now, Massachusetts, Michigan and New York are looking to, at a minimum, press pause on the use of this technology.
“Good policing does not require that we live in a police state,” says Lee Hepner, legislative aide to Supervisor Aaron Peskin, who championed the bill in San Francisco. He says they balanced people’s privacy interests with the need for public safety and concluded that the power of the technology had a dangerous chilling effect on society through its potential misuse and documented inaccuracies when trying to identify darker-complexioned and female faces. The law passed with an 8-1 vote in May.
“We think that while facial recognition technology use does need to be regulated,” says Frank Noto, president of Stop Crime SF, a community organization, “we favor a moratorium rather than a ban.”
To shut down the technology entirely, he argues, forecloses the development of applications and policies that could help identify a kidnap victim or a senior citizen with dementia who has wandered away from his home.
“To say you’re going to ban it forever is stupid, in our opinion,” Noto says.
While the city can overturn the law and allow for the technology’s use in the future, for now the city has put a hard stop on what some see as an Orwellian technology that creates a digital panopticon, affecting due process, diminishing privacy and increasing racial bias. Proponents of the technology, which is being used by law enforcement agencies across the country, think these concerns are misguided, but few deny that facial recognition’s application in the criminal justice system requires continued scrutiny and increased oversight.
Results May Vary
A hot topic, facial recognition technology has elicited various reactions around the country.
In August, following an examination of the state’s facial recognition system, which is built on driver’s license photos, Ohio Attorney General Dave Yost required a training program for any law enforcement officer using the database and ongoing auditing of the system.
In 2017, Vermont disallowed the search of its driver’s license databases by facial recognition after it became clear the program was illegal under state law. There is an ongoing debate to bring the practice back.
In Michigan, state police have a policy requiring an audit of its system, which it has made available to researchers. The Seattle Police Department, working with the American Civil Liberties Union of Washington, developed a policy in 2016 that allowed for the technology’s use, so long as it was regularly audited and was never used for real-time tracking.
At the national level, Congress has also taken up the issue. At a series of hearings earlier this year in the House of Representatives, there was bipartisan concern about the technology and interest in legislation. However, for the moment, no particular piece of legislation is likely to become law.
As governments decide how to manage facial recognition, the technology has made significant leaps in recent years due to the efficacy of deep learning, an artificial intelligence method, and greater access to face image databases, like those created by departments of motor vehicles around the country. With greater ability to analyze more faces faster, the technology is increasingly desirable to law enforcement.
“The most common way that law enforcement agencies use face recognition technology is to conduct investigations,” says Clare Garvie, senior associate at the Center on Privacy and Technology at Georgetown University Law Center. (Disclosure: The author is an adjunct professor of law at Georgetown.)
It is not known how many cases or investigations have used facial recognition, but its application comes in two main forms: running a photo against a database of faces—similar to running fingerprints—and conducting a real- or near real-time analysis of video footage.
Broadly speaking, each application starts with an image of a sought-after person. Law enforcement can leave the photo as is or manipulate it to alter the angle or a feature of the person’s face, which can affect the accuracy of the search.
The software then creates a “faceprint,” which marks geometric features, like eye position and distance between cheekbones, by analyzing tens of thousands of pixels. A faceprint is then compared against databases of other faceprints or live video footage for potential identification.
These comparisons do not identify exact matches. Instead, a statistical likelihood is provided, which indicates to what degree two faceprints are likely similar. A system may provide numerous potential matches at various levels of strength.
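The comparison step described above can be sketched as a similarity search over faceprint vectors. The sketch below is purely illustrative: the record names, toy three-dimensional vectors and the 0.80 threshold are all made up for the example, and no vendor’s actual algorithm is shown. Real systems embed faces into vectors with hundreds of dimensions and tune their thresholds empirically, but the shape of the output is the same: a ranked list of candidates with similarity scores, not a single definitive match.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two faceprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, database, threshold=0.80):
    """Return database entries whose similarity to the probe faceprint
    meets the threshold, strongest match first."""
    scored = [(name, cosine_similarity(probe, vec))
              for name, vec in database.items()]
    matches = [(name, score) for name, score in scored if score >= threshold]
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Toy 3-dimensional "faceprints"; real embeddings have hundreds of dimensions.
database = {
    "record_0417": [0.12, 0.95, 0.31],
    "record_0988": [0.85, 0.10, 0.52],
    "record_1203": [0.14, 0.90, 0.40],
}
probe = [0.13, 0.93, 0.33]  # faceprint extracted from the search photo

for name, score in rank_candidates(probe, database):
    print(f"{name}: similarity {score:.3f}")
```

Note that two of the three records clear the threshold here, which mirrors the point in the text: the system hands back several candidates at various strengths, and a human must decide what, if anything, to do with them.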
Both the Department of Homeland Security and the FBI have touted successful applications of facial recognition in uncovering passport fraud or finding a wanted fugitive. However, the technology has produced mixed results in other jurisdictions.
The New York City Metropolitan Transportation Authority piloted a facial recognition program to monitor the Robert F. Kennedy Bridge in 2018. An internal email obtained by the Wall Street Journal said the “initial period for the proof of concept testing at the RFK for facial recognition has been completed and failed with no faces (0%) being detected within acceptable parameters.” The city continues to test facial recognition at its five bridges and two tunnels.
The NYPD also uses facial recognition, which has led to nearly 3,000 arrests over 5 1/2 years, according to a 2019 report from Georgetown University Law Center.
In 2017, Orlando, Florida’s police department tested Amazon Rekognition, a cloud-based facial recognition software. However, the pilot never got off the ground because the city’s security cameras were not compatible with the software and bandwidth was too slow. The city’s relationship with Amazon ended in 2019.
Compared with other jurisdictions, Washington County, Oregon, west of Portland, is relatively reserved in its use and expectations of facial recognition.
“It’s not like the be-all-end-all that is going to change law enforcement, from what I’ve seen and what we’ve done,” says Sgt. Danny DiPietro of the Washington County Sheriff’s Office. Built on Amazon Rekognition, the program permits an investigator to run a photo against the county’s mugshot database, which has about 300,000 photos in it, according to DiPietro.
The program is used in limited circumstances, like analyzing photos from crime scenes, confirming identification of a person in the field or if there is a threat to human life, as stated in an internal policy from January 2019. Since its deployment in 2017, it has led to at least one arrest, for a theft at a business. In 2018, the first full year of the program, it was queried about 1,000 times, DiPietro says.
He emphasizes, however, that the technology is not perfect and that it should only be used to generate leads. “If you get a match, that is not probable cause to arrest an individual,” he notes.
Garvie at Georgetown says that treating facial recognition matches as an investigative lead, and not the basis for a warrant, helps avoid legal scrutiny at trial. She adds that in many jurisdictions it is unclear what level of corroboration is needed to make the jump from lead to arrest.
Right to Disclosure?
Even if facial recognition is used only as a lead, there is a distinct need for transparency during a prosecution, which a recent appeal from Florida underscores.
In September 2015, two undercover officers in Jacksonville, Florida, posed as addicts looking to score drugs when they were unexpectedly stopped by an older black man who called himself “Midnight.”
In the ensuing moments, one officer bought $50 worth of crack cocaine from the man while the other quietly took photos of “Midnight” on an old TracFone. Once the deal was done, the officers drove off.
Because the officers did not know the seller’s real identity, the grainy photos taken during the sale—along with the seller’s race, gender and pseudonym—were run through the Face Analysis Comparison Examination System (FACES), a facial recognition program operated by the Pinellas County, Florida, Sheriff’s Office. Launched in 2001, it’s available to law enforcement across the state and to federal agencies. The system receives up to 8,000 searches a month, according to research from Georgetown University Law Center.
Among multiple potential matches, the machine handed back the name and information of Willie Allen Lynch, a local man with a rap sheet.
After being identified by the two officers, Lynch was arrested and charged. With no mention of facial recognition in pretrial disclosures, it wasn’t until eight days before the trial that Lynch’s attorney learned about the use of FACES.
With the process secret and the technology opaque, Lynch didn’t know if his photo had been manipulated by analysts, what the comparative quality of the other potential matches was, or vitals of the program itself, including error rates or whether the software was audited.
“That whole set of information is really crucial and necessary for defense attorneys to do their job and for courts to do their job to tell if this is an accurate tool or junk science that’s being hidden behind a black box algorithm,” says Nate Freed Wessler, a staff attorney at the ACLU Speech, Privacy and Technology Project.
Under a Brady disclosure, a constitutional requirement that prosecutors hand over exculpatory information to the defense, it is standard practice to disclose other matches generated by a traditional police lineup or photo array. Yet the trial court denied the defense’s attempt to compel these documents.
“I think Brady is one of the most critical issues with facial recognition technology,” says Andrew Ferguson, visiting professor of law at American University Washington College of Law. “If this tech is going to be used—we know it’s new, we know it’s not fully tested—all of the underlying processes need to be used with the utmost transparency or not used at all.”
The Jacksonville Sheriff’s Office declined to be interviewed for this article. Attorneys for the state and Lynch did not respond to requests for comment.
Having appealed the trial court’s decision, in December 2018, a three-judge panel for the 1st District Court of Appeal of Florida found against Lynch because he failed “to show ‘that there is a reasonable probability that the result of the trial would have been different if the suppressed documents had been disclosed to the defense.’ ”
Denied the opportunity to challenge the technology used to name him as a suspect, Lynch is now serving eight years in prison.
Need for Regulation
While Brady disclosures are an existing legal safeguard that can improve the transparency and fairness of the technology’s growing use, the legal infrastructure needed to protect people’s privacy from some facial recognition searches is lacking. When it comes to police tracking a person in real time, Ferguson says, “The current Fourth Amendment largely fails to regulate the future of facial recognition.”
Like fingerprints or DNA left on a discarded cup, an individual’s face in public isn’t protected under current Fourth Amendment doctrine, which governs search and seizure.
Ferguson thinks, however, that the U.S. Supreme Court’s recent digital privacy cases—U.S. v. Jones and Carpenter v. U.S.—portend a new path. Both cases regarded the public movement of two men being investigated for a crime, tracked through warrantless uses of GPS and cellphone geolocation data, respectively. In both cases, the court determined that a warrant was needed for such surveillance.
In Carpenter, the 5-4 majority opinion, written by Chief Justice John Roberts, concerned itself with issues raised by the collection and aggregation of personal data.
“A cellphone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters and other potentially revealing locales,” he wrote. “Accordingly, when the government tracks the location of a cellphone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”
With faces permanently affixed to their owners and more ubiquitous than cellphones, it’s possible that this logic demands a warrant when authorities want to track someone in public through facial recognition, Ferguson reasons.
As these issues make their way to the courts, some researchers worry that, regardless of legal protections, facial recognition technology has a racial and gender bias problem.
“Government-sponsored studies in the last six months reveal commercial systems have varying performance results depending on skin properties, age and gender,” says Joy Buolamwini, a researcher at MIT, in a statement. “In general, these studies indicate people of color, children and women-identified individuals have greater misidentification risks than their counterparts.”
In her 2018 dissertation research, Buolamwini, working with Timnit Gebru of Microsoft Research, found that major facial recognition software from companies like IBM, Microsoft and China’s Face++ was 8.1% to 20.6% less accurate at classifying women’s faces than men’s. For Face++, 95.9% of misgendered faces were women’s. When it came to race, error rates for darker faces were 11.8% to 19.2% higher than for lighter-complexioned faces.
In a criminal justice system that already disproportionately arrests and incarcerates black and brown people, this is seen by many, like Wessler at the ACLU, as an aggravating factor.
However, to Peter Trepp, CEO of FaceFirst, a facial recognition company, these concerns are overblown.
“A lot of people miss the value of facial recognition because they’ve got a lot of questions about how it actually works,” he says, acknowledging the need for “guardrails” to keep the technology in check. In this knowledge gap, he believes people are receiving incorrect information about the technology.
Trepp, whose company currently does not take law enforcement contracts, says the misconceptions can come from relying on outdated studies in this rapidly changing field or misunderstanding how the technology functions. In particular, he says these tools are not racist or solely developed by white men, a common critique, and that his platform doesn’t have the accuracy gaps noted in research like Buolamwini’s.
The Security Industry Association, a trade group, recently released a report dispelling what it sees as myths surrounding the technology. The authors argue that facial recognition is not operating lawlessly, but within the existing legal and constitutional frameworks that oversee criminal procedure generally. It also took aim at the notion that facial recognition struggles to identify darker faces or women by citing research from the Florida Institute of Technology and the University of Notre Dame, which found that four face matching programs tested did not show a discrepancy in accuracy.
Just the First Step?
Without a national moratorium, this technology will continue to be used in criminal investigations. And as the technology and the infrastructure supporting it improves, its application and impact will expand.
For example, even with police body camera company Axon’s decision this summer to hold off on using the technology, facial recognition may still come to body cameras. Digital Barriers, a body camera company in the U.K., offers real-time video with integrated facial recognition. Motorola, also a police body camera manufacturer, has a subsidiary, Avigilon, which has facial recognition capabilities.
In China, police officers are equipped with such body cameras. However, Garvie at Georgetown is unaware of the technology being deployed in the U.S. Regardless, the California State Senate recently passed a bill to preemptively ban it, and the California State Assembly is currently debating it. As of press time, the bill had not become law in the state.
Beyond the police, local governments are investing heavily in “smart city” infrastructure, like 5G connectivity, which will put more local government functions online and increase data collection.
“I think what the smart city does, it both accelerates things like facial recognition,” says Frank Pasquale, law professor at the University of Maryland Francis King Carey School of Law, “and it also creates a future where—even if you ban facial recognition—there are all these other forms of recognition that are also going to be incredibly important and that can do the same thing.”
He points to inventions like “smart pavement,” which can track movement based on a person’s gait, and a laser developed by the Department of Defense that can identify people from a distance by their heartbeat, a biometric signature. While neither are widely deployed, both raise similar disclosure and privacy issues as facial recognition.
Trepp at FaceFirst feels that facial recognition is fighting a headwind in the current political climate, but he’s optimistic about the future.
Looking back about 20 years to when “dumb cameras” and CCTV proliferated, people also worried their privacy was being eroded, he says.
“Everyone got over all that, and I think the same is going to happen with facial recognition,” he says, “but I think it’s going to take some time.”
Corrected Sept. 27 to indicate that New York is not considering a ban for law enforcement but for other actors.