Law Scribbler

The rise of the machines—but with checks and balances


Jason Tashea


In May, San Francisco banned the governmental use of facial recognition technology.

With an eye toward law enforcement’s adoption of the invasive software, the board of supervisors unanimously banned it because of documented bias against people with dark skin and privacy concerns that raised the specter of Big Brother.

While these considerations are legitimate and the approach novel, San Francisco’s reaction is not surprising.

The U.S. has been the world leader in AI development, including tools used in the criminal justice system, but we are laggards when it comes to regulating and overseeing the same technology. Though the outcomes are opposite, San Francisco's ban is not much different from the hands-off approach taken by other jurisdictions: Both fail to grapple with the fundamental and nuanced issues created by the deployment of AI in the criminal justice system.

Fortunately, a new report from the Law Society of England and Wales, "Algorithms in the Criminal Justice System," lays out a thoughtful alternative: legal structures and procedures around AI in the criminal justice system that protect due process and the rule of law while allowing for experimentation with the technology.

As in the U.K., AI is proliferating at all levels of the U.S. justice system. Risk assessments assist in bail, sentencing and parole decisions. Police are dispatched into communities at an algorithm's insistence. And facial recognition is being rolled out by law enforcement at the federal, state and local levels.

The challenges and harms of these technologies are well-documented. Facial recognition and risk assessments show racial bias. Complex algorithms are not built to “explain” their conclusions, which closes a part of an otherwise open court process. Even if AI software is “explainable,” private companies shield their software from scrutiny by claiming it as a trade secret—despite being used by a public agency.

These challenges are compounded in the U.S. because federal and state lawmakers are using algorithms as a public policy crutch.

At the federal level, the First Step Act, passed in 2018, is expected to release more people from federal prison with the assistance of a risk assessment tool. In a similar vein, California passed a major bail reform bill, SB-10, that is now on the ballot. If ratified, the law would require local agencies to use risk assessment tools in lieu of cash bail.

In both cases, drafters of these otherwise decent laws made the bet that an algorithm can stand in for existing processes and policy choices. At the same time, neither law provides legal standards on how the tool should be built, what oversight and transparency are needed or how to assess an algorithm's efficacy, including its impact on the legal rights of the accused.

Handing off this type of rule-making to an agency is standard in legislation; however, there's evidence that agencies are also being deferential to the technology. In New York, for example, the Department of Corrections and Community Supervision has put in place multiple layers of bureaucracy to limit human override of the agency's risk assessment tool.

Opinions expressed through math

This legislative and regulatory trend outsources decisions traditionally made by publicly accountable individuals to private companies. Algorithms are opinions expressed through math, and when an algorithm is used for a public purpose, everything from the problem definition to the data used to build the algorithm is a public policy concern. As the report from London notes, the value-based decisions that ultimately make up an algorithm are "usually not between a 'bad' and a 'good' outcome, but between different values that are societally held to be of similar importance."

Take, for example, defining fairness when using a risk assessment for bail. As University of Pennsylvania criminology professor Richard Berk wrote with colleagues in 2017, there are six types of fairness, and not all are compatible with each other. A government could decide that fairness is achieved when a tool provides the same accuracy for two protected groups, like men and women. Or fairness can be attained if the error rates are the same across groups, even though that might mean more men than women are incarcerated.

These two outcomes are not compatible, so a choice needs to be made. Depending on the community, either could be the “right” decision, but it’s not a decision to be left up to a software company.
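To make that tension concrete, here is a minimal Python sketch using invented numbers (the data and the confusion_rates helper are hypothetical, not drawn from any real risk assessment tool): the two groups end up with identical overall accuracy, yet their false positive and false negative rates diverge, so a tool can satisfy one definition of fairness while failing another.

```python
# Illustrative sketch only: hypothetical risk flags and outcomes for two groups,
# showing that "equal accuracy" and "equal error rates" are different standards.

def confusion_rates(predicted_high_risk, reoffended):
    """Return (accuracy, false positive rate, false negative rate) for one group."""
    tp = sum(p and y for p, y in zip(predicted_high_risk, reoffended))
    tn = sum(not p and not y for p, y in zip(predicted_high_risk, reoffended))
    fp = sum(p and not y for p, y in zip(predicted_high_risk, reoffended))
    fn = sum(not p and y for p, y in zip(predicted_high_risk, reoffended))
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # flagged high risk, did not reoffend
    fnr = fn / (fn + tp) if (fn + tp) else 0.0  # flagged low risk, did reoffend
    return accuracy, fpr, fnr

# Hypothetical records: 1 = flagged high risk / did reoffend, 0 = otherwise.
group_a_pred = [1, 1, 1, 0, 0, 0, 1, 0]
group_a_true = [1, 0, 1, 0, 0, 1, 1, 0]
group_b_pred = [1, 0, 0, 0, 1, 0, 0, 0]
group_b_true = [1, 0, 1, 0, 0, 0, 0, 0]

for name, pred, true in [("Group A", group_a_pred, group_a_true),
                         ("Group B", group_b_pred, group_b_true)]:
    acc, fpr, fnr = confusion_rates(pred, true)
    print(f"{name}: accuracy={acc:.2f}, false positive rate={fpr:.2f}, "
          f"false negative rate={fnr:.2f}")
```

In this toy example, both groups score 0.75 on accuracy, but Group B's false negative rate is twice Group A's. Which of those measurements counts as "fair" is exactly the kind of value judgment the report argues belongs to publicly accountable officials.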

Beyond the legislative and executive branches, unjustifiable trust in these tools extends to the judiciary. The Supreme Court of Wisconsin in 2016 decided that the lack of transparency of a risk assessment tool used at sentencing did not infringe on a defendant's due process rights. A California appeals court in 2015 reached a similar conclusion regarding DNA testing software, which was used to convict a man of rape and murder.

Collectively, these approaches to legislating and judicial decision making are regrettable—but fixable.

To be clear, the use of algorithms is not fundamentally the problem. The problem is the lack of accountability, effectiveness, transparency and competence surrounding these tools, as defined by the IEEE’s comprehensive principles on the ethical use of AI in legal systems.


AI best practices

To that end, lawmakers and judges must stop abdicating their role overseeing AI in the criminal justice system. As the report from London says: “The lawful basis of all algorithmic systems in the criminal justice system must be clear and explicitly declared in advance.”

Governments of all sizes in the U.S. need to think about how they will enshrine best practices for AI deployed in the criminal justice system, including issues traditionally missing from this discussion, like cybersecurity, as the recent U.S. Customs and Border Protection breach of face images shows.

There also needs to be a legal mandate to audit both the algorithms behind these tools and the data used to "train" them. To do this well, the privacy of the people in those datasets must be taken into account.

As for agencies, they need to update procurement policies to keep value-laden decision-making under the auspices of the government, create transparency and allow for constant scrutiny for civil rights violations, as recommendation four of the report lays out. If SB-10 becomes law, there is a ripe opportunity for the Judicial Council of California to take this specific recommendation to heart.

These proposals, if implemented, are not the end of the discussion. Sunset clauses need to be baked into these new laws. Doing so, according to the Law Society, will force lawmakers to continually review existing laws in light of changing technologies and best practices, allowing for updates and fixes to previous iterations.

The report also recommends developing governmental capacity to better understand where an AI system may be appropriate and how to deploy it. To help federal lawmakers understand technology without partisan or corporate interference, the U.S. Office of Technology Assessment, closed by Congress in 1995, must be reopened and fully funded. This will inform policymaking across the technology spectrum. Similar in-house education and capacity building should also be considered by state legislatures and judiciaries.

Collectively, these proposed policies create the rules and strictures that allow for the safe experimentation of new technologies while holding the rule of law and civil liberties above all else. Beyond safeguarding legal principles, this approach will create greater understanding of and trust in the technologies that pass muster.

Currently, the U.S. is on a path of resignation when it comes to AI in the criminal justice system. Banning the technology outright, as San Francisco did, is as blunt as the hands-off approach is feckless; both are insufficient. It's time for U.S. policymakers to take the road less traveled and hold AI deployed in the criminal justice system accountable.

Jason Tashea is the author of the Law Scribbler column and a legal affairs writer for the ABA Journal. Follow him on Twitter @LawScribbler.
