Business of Law

Artificial intelligence has made great inroads, but hasn't yet increased access to civil justice

Illustration by Sara Wadford/ABA Journal

When it comes to civil law, artificial intelligence identifies suitable jurors, speeds legal research, predicts judicial outcomes, and promises cheaper and faster electronic discovery. These AI use cases are becoming table stakes in litigation support software for law firms and corporate legal departments.

But when it comes to pro se litigants navigating the complex and intimidating civil court system, AI has done little to ease access to court procedures and legal documents.

According to the Legal Services Corp., 86% of the civil legal problems reported by low-income Americans received inadequate or no legal help. On average, close to 50% of all cases filed in the U.S. Courts of Appeals since 1995 were pro se. In 2019, the National Center for State Courts reported, based on anecdotal data, that 75% or more of civil cases in state and local courts involve at least one self-represented litigant.

These pro se litigants need detailed information about their legal rights, how courts work, and how to file documents and handle their cases. They also drain court resources already hampered by financial constraints and manual processes.

With AI using data to improve customer experience in other industries—from banking and retail to consumer electronics and transportation—can it enhance access to justice in civil court?

There are existing AI products attempting to assist in civil courts

The NCSC identifies several AI technologies in civil courts that can enhance courts’ access, including natural language processing, machine learning and chatbots. NLP is used to generate documents or get legal answers from guided questionnaires that follow decision trees of legal or business rules. Legal navigators such as Florida Law Help and the Colorado Resource Network assist pro se litigants in identifying legal issues, drafting and answering complaints and filing court documents.
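The decision-tree pattern behind these guided questionnaires is simple to picture. The sketch below is a hypothetical Python illustration, not code from Florida Law Help or any real navigator: each node asks a question, each answer selects the next node, and leaves point to a document or resource.

```python
# Hypothetical guided-questionnaire decision tree of the kind legal
# navigators use to route pro se litigants. All node names, questions
# and resources are invented for illustration.

TREE = {
    "start": {
        "question": "Is your issue about housing, debt, or family law?",
        "answers": {"housing": "housing", "debt": "debt", "family": "family"},
    },
    "housing": {
        "question": "Have you received an eviction notice?",
        "answers": {"yes": "eviction_answer_form", "no": "housing_rights_info"},
    },
}

# Leaves map to a concrete next step: a form to draft or a resource page.
LEAVES = {
    "eviction_answer_form": "Draft an answer to the eviction complaint.",
    "housing_rights_info": "Review tenant-rights information.",
    "debt": "Debt-collection resources.",
    "family": "Family-law resources.",
}

def walk(answers):
    """Follow one branch of the tree given the user's answers in order."""
    node = "start"
    for a in answers:
        if node in LEAVES:
            break
        node = TREE[node]["answers"][a]
    return LEAVES.get(node, node)
```

In a real navigator, each leaf would feed the collected answers into a document-assembly template; here the tree simply returns the next step as text.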

Because of COVID-19, many courts now consider e-filing an access-to-justice issue. In Palm Beach County, Florida, courts use machine learning supplied by Apopka, Florida-based Computing System Innovations, along with optical character recognition, to scan and ingest e-filed documents and automatically docket them.

Henry Sal, CSI’s founder and CEO, says the company’s Intellidact software automatically separates, analyzes and classifies e-filed documents by type and docket code. Intellidact AI extracts data specific to each docket code, transforming unstructured document text into structured content. Software bots then perform the data entry, updating records in the court’s case management system.

According to Sal, Intellidact can automatically process 75%-80% of documents filed in a case management system without human intervention. “The balance requires human intervention because of OCR errors, or the software has not seen the document before,” Sal says. “The same processing engine can identify and redact privacy protection at no additional cost.”
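The classify-then-extract pipeline Sal describes can be pictured with a toy version. The docket codes, keyword rules and case-number pattern below are hypothetical stand-ins; Intellidact's actual system uses trained machine-learning models rather than hand-written rules like these.

```python
import re

# Toy classify-then-extract pipeline: assign a docket code from the
# document text, then pull out a structured field. Codes, keywords and
# the case-number format are invented for illustration.

DOCKET_RULES = {
    # More specific rules must come before more general ones, since the
    # first matching code wins.
    "MOTION": ["motion to", "moves this court"],
    "ANSWER": ["answer to the complaint", "defendant denies"],
    "COMPLAINT": ["complaint", "plaintiff alleges"],
}

def classify(text):
    """Return the first docket code whose keyword appears in the text."""
    lowered = text.lower()
    for code, keywords in DOCKET_RULES.items():
        if any(k in lowered for k in keywords):
            return code
    return "UNCLASSIFIED"  # falls back to human review, as Sal notes

def extract_case_number(text):
    """Pull a case number shaped like '2021-CV-01234', if present."""
    m = re.search(r"\b\d{4}-[A-Z]{2}-\d{4,6}\b", text)
    return m.group(0) if m else None

doc = "Case No. 2021-CV-01234. Plaintiff alleges breach of contract."
record = {"code": classify(doc), "case_no": extract_case_number(doc)}
```

The structured `record` is what a software bot would then enter into the case management system, closing the loop from unstructured filing to docketed entry.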

The software also automates Tyler Technologies’ Odyssey File & Serve portals, which handle e-filing, decrease file-processing time and make documents immediately available to parties and the court. The first Intellidact projects under Tyler went live in courts in Stanislaus County, California, and Tarrant County, Texas, in January.

Since texting is popular among court users, machine learning and NLP technology can develop text-based chatbots to handle public inquiries and increase court access.

The New Jersey Courts launched a chatbot in 2019 called the Judiciary Information Assistant. The courts fed it Q&As, website FAQs, manuals, operating procedures and other court information to compile more than 10,000 question-and-answer pairings. JIA uses artificial intelligence to answer commonly asked questions by guiding users to specified court and legal topics—from attorney registration to tax—on the court’s home page. Once JIA directs users to an answer, users can ask additional questions in free text or return to the main menu.
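A retrieval bot like JIA works by matching a user's free-text question against its stored question-and-answer pairings. The sketch below uses simple word overlap as the matching score; the pairings and answers are invented, and the real system's natural language processing is far more sophisticated than this.

```python
# Toy FAQ-retrieval bot: score each stored question by how many words
# it shares with the user's question, and return the best match's
# answer. Pairings are invented, not New Jersey's actual data.

QA_PAIRS = [
    ("How do I register as an attorney?",
     "See the attorney registration page."),
    ("How do I pay a traffic ticket?",
     "Pay tickets through the municipal court portal."),
    ("How do I file a small claims case?",
     "File in the Special Civil Part."),
]

def tokens(text):
    """Lowercase the text, drop question marks, split into a word set."""
    return set(text.lower().replace("?", "").split())

def answer(user_question):
    """Return the answer whose stored question shares the most words."""
    q_tokens = tokens(user_question)
    best = max(QA_PAIRS, key=lambda qa: len(q_tokens & tokens(qa[0])))
    return best[1]
```

Scaled to JIA's more than 10,000 pairings, a production system would replace word overlap with a trained intent classifier or semantic similarity model, but the retrieve-best-pairing shape is the same.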

The Superior Court of California in Los Angeles County rolled out the LACourtConnect chatbot in June 2020 to automate the first level of support for remote hearings. Built with Microsoft Azure Cognitive Services, the bot was meant to match support agents’ service level in answering users’ queries while reducing the volume of calls to those agents.

Nevertheless, the court found training the AI-powered informational bot very time-consuming.

To get the bot up and running quickly and efficiently, the court designed it along the same lines as the chatbot used to order Domino’s Pizza. The LACC chatbot uses preliminary or guiding questions to lead users to the right answers from a knowledge base of 100 questions based on user guides and FAQs.

The United States is behind the curve compared to many other countries using AI in courts

When it comes to integrating AI with civil courts to better serve the public, the U.S. is behind other countries, including Brazil and China.

Brazil’s Federal Supreme Court uses an AI system called VICTOR to address the court’s backlog of petitions. According to the 2017 Brazilian Federal Supreme Court Activity Report, the court issued 126,531 decisions and recorded its lowest year-end total of pending cases in five years (45,437). VICTOR reduces the initial analysis of petitions from 30 minutes to five seconds.

Chinese courts embrace AI in trials and verdicts, according to the Supreme People’s Court. As early as 2016, the high court in Hebei province introduced the “smart court” concept.

The smart court includes electronic case filing with OCR capabilities; case party identification and automated document production; and it delivers related laws, regulations and authoritative cases to judges.

Many U.S. courts cannot develop and deploy AI because they lack digital processes and data flowing from e-file and modern case management systems.

Alan Carlson

Alan Carlson argues courts need to move to e-filing before deploying AI. Photo courtesy of Alan Carlson

“How can you build a model where there’s no data?” asks Alan Carlson, a court management consultant and former court executive in Orange and San Francisco counties in California. “Even those courts with labeled or structured data to train AI systems need to clean the data before they use it, and that takes time and money.”

He adds: “The difficulty with data includes the insufficient size of available data sets, the absence of data standards, data integration, and data privacy and security.”

Beyond the data problems, courts face technological challenges such as the lack of interoperability and of transparency in AI algorithms. Then there are the skills needed to ask data-oriented questions, and the ethical challenges of replacing human judgment with machine inferences and of assigning responsibility for errors made using AI.

Is there a solution? “It’s not just an AI thing,” Carlson says. “Courts must first move to e-filing.”

CSI’s Sal adds that with e-filing, “courts need to trust that AI algorithms can provide higher-quality output from reading documents than humans. If you have 100% accurate text and no OCR errors, you will get 100% [accurate] extraction and classification of documents. The best OCR technology is still 92% accurate.”

“AI is slowly getting integrated into various places,” Carlson says. For example, Tyler Technologies acquired Modria, an online dispute resolution system, and incorporated the online dispute resolution software into Odyssey Court Solutions.

Colin Rule

Colin Rule: “Although there are forward-looking courts, they are not built to innovate.” Photo courtesy of Colin Rule

“There is faster innovation in private dispute resolution because we don’t have to worry about getting judges signed on or approvals through the courts,” says Colin Rule, co-founder of Modria. Although private dispute resolution providers do not yet use digital judges, they “are doing interesting things with machine learning and AI to score cases, guide parties and present a zone of potential agreement,” he says.

Rule adds there is a lot that technology can do to structure negotiation and educate parties. Modria created online workspaces where parties in a court case could work out a mutual agreement.

“Although there are forward-looking courts, they are not built to innovate,” Rule says, pointing out that judges and courts are a monopoly and are immune to pressure to innovate due to lack of competition.

Artificial intelligence needs to prove its trustworthiness to court users

Attorneys, courts and the judiciary operate on trust. Without it, there is no avenue for self-represented litigants to access justice. The same is true of AI in courts.

Nicolas Economou

Nicolas Economou: “Trustworthy adoption of AI means adoption based on sound evidence.” Photo courtesy of H5

“There is not a single area of the legal system where AI does not have the potential to advance the functions of the law and the values that animate the law. It all comes back to ensuring trustworthiness,” says Nicolas Economou, who is CEO of H5, chair of the Future Society’s Law & Society Initiative and chair of the law committees of the Institute of Electrical and Electronics Engineers’ global initiative on ethics of autonomous and intelligent systems.

“Trustworthy adoption of AI means adoption based on sound evidence of the extent to which AI-enabled processes in the legal and judicial domain do, in fact, advance justice,” Economou says. Like the trustworthy adoption of drugs or surgical procedures, he adds there must be evidence that AI is effective at achieving a specified objective without undue risk.

In opening remarks at the Athens Roundtable on Artificial Intelligence and the Rule of Law in November, Judge Isabela Ferrari of the federal court in Rio de Janeiro called on regulators to develop annual benchmarking programs for AI applications in legal and judicial systems. Such programs would produce trustworthy, transparent evidence, accessible to all, of whether specific legal and judicial AI applications effectively meet desired objectives. Otherwise, “we can only turn to marketing materials and studies of dubious qualities,” she warned.

Court users must trust that AI and the judicial agents produce desirable outcomes and that they can be held accountable if not.

“This requires an operative definition of desirable outcomes and the ability to measure the extent to which these are met,” Economou says. “This is not that hard. If we can achieve trustworthy technological adoption—or, equally importantly, avoidance of adoption—in medicine and aviation, we can surely do so for AI in the legal domain.”

This story was originally published in the April/May 2021 issue of the ABA Journal under the headline: “Toward Smarter Courts: Artificial intelligence has made great inroads—but not as far as increasing access to civil justice.”
