ABA Journal

The New Normal

Does machine-learning-powered software make good research decisions? Lawyers can’t know for sure


By BRIAN SHEPPARD



Do lawyers need to know what they are doing? If you asked sellers of legal research products, their answer might be “mostly.” New technology is changing just how much of the legal research process is hidden from view. The next question sellers must ask is whether the secrecy will pay off.

The days when legal research companies could rest on their ability to provide access to materials are over. To compete, each company has two choices: make its product cheaper or make its searches better than those of its competitors. Because the first alternative could set off a price war, companies are fiercely trying to improve their searches, and they increasingly believe that a technology called machine learning is the way to do it.

Machine learning allows machines to change the way that they work without new, external programming. Programs can, in a sense, improve themselves.

The problem is that few lawyers understand what machine learning can do for them, let alone how it works. This is due in part to the guarded manner in which companies implement it. Machine learning is powered by complex proprietary algorithms, which companies keep under wraps and frequently change.

Secrecy makes it harder for consumers to realize the full benefits of a competitive marketplace. Lawyers will struggle to make educated choices between programs, limiting their ability to choose the best product for themselves and for their clients. Making matters worse, their lack of understanding will stifle the improvement of the algorithms themselves.

A primer on algorithm-powered legal search

An algorithm is a set of rules that a machine will follow. When lawyers perform e-research, they input information into a search field. Algorithms shape how computers interpret that information, which alters the results of the search. They might change how many cases are selected, which cases they are, and in what order.

Algorithms play a crucial role in natural language searching. They change the substance of the search itself, perhaps placing greater weight on certain words or supplementing the search with synonymous or logically related terms. Indeed, the bleeding edge of search innovation seeks to incorporate the semantic or conceptual relations between words and phrases.

For the natural language search “duties of truck drivers,” the algorithm might add terms such as “responsibility” or “commercial motor vehicle operator” to the search, or it might favor cases that use “truck driver” frequently over cases that use “duty” frequently. These algorithmic functions happen behind the curtain, and they are subject to the strategic decisions of the product designer, decisions that must be made before the search even happens.
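
To make the idea concrete, here is a minimal Python sketch of that kind of query expansion and term weighting. The synonym table, the weights and the function names are hypothetical illustrations, not any vendor’s actual algorithm.

    # Hypothetical sketch of query expansion and term weighting; the synonym
    # table and weights are invented for illustration, not a vendor's method.
    SYNONYMS = {
        "duties": ["duty", "responsibility", "obligation"],
        "truck drivers": ["truck driver", "commercial motor vehicle operator"],
    }
    TERM_WEIGHTS = {"truck driver": 2.0, "duty": 1.0}  # designer favors the concrete phrase

    def expand_query(query):
        """Return the original query plus any synonyms the designer mapped to its phrases."""
        terms = [query.lower()]
        for phrase, alternates in SYNONYMS.items():
            if phrase in query.lower():
                terms.extend(alternates)
        return terms

    def score_case(case_text, terms):
        """Weight each term's frequency in the case text; higher scores rank higher."""
        text = case_text.lower()
        return sum(text.count(term) * TERM_WEIGHTS.get(term, 1.0) for term in terms)

    terms = expand_query("duties of truck drivers")
    cases = {
        "Case A": "The duty of a truck driver includes inspecting the vehicle...",
        "Case B": "A commercial motor vehicle operator's responsibility under the statute...",
    }
    ranked = sorted(cases, key=lambda name: score_case(cases[name], terms), reverse=True)

In this sketch the designer’s choices, which synonyms to add and which terms to weight, determine the ranking before the lawyer ever sees a result.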

Compare this to good old terms-and-connectors searching. The search

dut! /s "truck driver"

will retrieve all cases within the selected database containing words beginning with the root “dut” that are in the same sentences as the phrase “truck driver.” Nothing more. Nothing less.
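
By contrast, that terms-and-connectors rule is simple enough to reproduce mechanically. The following Python sketch, assuming a crude sentence splitter, shows that the search’s behavior is fully determined by the query itself; nothing is hidden.

    import re

    def matches_query(case_text):
        """True if any sentence contains a word starting with "dut" and the
        phrase "truck driver" -- i.e., the search dut! /s "truck driver"."""
        for sentence in re.split(r"[.!?]+", case_text.lower()):
            if re.search(r"\bdut\w*", sentence) and "truck driver" in sentence:
                return True
        return False

    # "duty" and "truck driver" share a sentence, so this case is retrieved.
    matches_query("The truck driver breached his duty of care. The weather was poor.")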

Algorithm-powered searching can improve results, but it makes important search decisions automatically and without the searcher’s knowledge. As a result, it lacks transparency, particularly compared to terms-and-connectors searching.

But does transparency even matter? Lawyers have been performing natural language searches for a long time without issue.

It might matter.

The increasing stakes of transparency

It could be that natural-language searching hasn’t yet caused trouble because terms-and-connectors searching currently acts as a safety net. Lawyers might follow up their natural-language searches with terms-and-connectors searches just in case they missed something. A well-respected 2013 survey provided evidence that terms-and-connectors searching remains popular. Only about one-third of lawyers surveyed did it “occasionally” or less.

But this is changing. In that same survey, the frequency of terms-and-connectors searching correlated positively with lawyer experience. This suggests that young lawyers are less inclined to do it.

Market conditions will accelerate the transition toward natural language search. Since all terms-and-connectors searches work the same regardless of the product, leading companies have pushed aggressively towards devising their own secret recipe to power natural-language searching. The most promising method to generate that recipe is machine learning.

A primer on machine learning

In search applications, machine learning uses feedback from users and other data to change the way that results are collected, selected, and ranked by the algorithm.

Imagine that, for the natural-language search “copyright infringement,” a search algorithm has been preprogrammed to identify the 100 cases that use the term “copyright infringement” most frequently and to rank them with the highest frequency first. For this algorithm to be perfect, it would have to place the most relevant case first, the second most relevant second, and so on.
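
A minimal Python sketch of that preprogrammed baseline might look like the following; the corpus, the phrase and the cutoff of 100 come from the hypothetical, and everything else is illustrative.

    def rank_by_frequency(cases, phrase, top_n=100):
        """Rank case names by how often the search phrase appears, most frequent first."""
        counts = {name: text.lower().count(phrase.lower()) for name, text in cases.items()}
        return sorted(counts, key=counts.get, reverse=True)[:top_n]

Under this baseline, whichever case mentions the phrase most often lands in the first position, whether or not it is the most relevant.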

Now imagine the search algorithm is powered by a machine-learning program that seeks to determine the degree to which the results provided deviate from perfection. To do this, it will monitor user interactions with the computer.

Assume that the lawyer who searched “copyright infringement” clicked on the first case but didn’t print it out, skipped the second case, printed out the third, and didn’t scroll any further. The program could interpret the behavior as evidence that the search algorithm needs adjustment; the case she printed out is probably her favorite, but it came in third place. It could then examine the textual features of the favored and disfavored cases for patterns that reveal relationships between her search terms and the words in her search results. If enough users enter the same search and respond the same way, it might incorporate those patterns into the search algorithm. Thereafter, future users who search “copyright infringement” could see that formerly third-place case in the first position. The machine has “learned” without needing a human being to reprogram it.
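
A minimal sketch of that feedback loop, continuing the hypothetical: each click, printout or skip nudges a per-case boost, and once enough users respond to the same search, the boost reorders the frequency-based baseline. The signal weights and the threshold are invented for illustration, not any vendor’s actual learning method.

    from collections import defaultdict

    SIGNAL_WEIGHTS = {"click": 1.0, "print": 3.0, "skip": -1.0}  # hypothetical weights
    FEEDBACK_THRESHOLD = 50  # only adjust once enough users have responded

    boosts = defaultdict(lambda: defaultdict(float))  # query -> case -> learned boost
    feedback_counts = defaultdict(int)                # query -> number of signals seen

    def record_feedback(query, case_name, signal):
        """Accumulate user behavior (clicks, prints, skips) for a query/case pair."""
        boosts[query][case_name] += SIGNAL_WEIGHTS.get(signal, 0.0)
        feedback_counts[query] += 1

    def rerank(query, baseline):
        """Reorder the baseline ranking by learned boosts; the sort is stable,
        so cases with equal boosts keep their frequency-based order."""
        if feedback_counts[query] < FEEDBACK_THRESHOLD:
            return baseline
        return sorted(baseline, key=lambda name: boosts[query][name], reverse=True)

In this sketch, the case the lawyer printed accumulates the largest boost, so once enough users respond the same way it moves from third place to the top of the results for that search.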

Machine learning occurs in multiple ways and in a variety of contexts, some of which do not involve customer feedback. But even our simple example sheds light on the strengths and weaknesses of machine learning for legal search. It shows that machine learning has the capacity to improve the user experience by watching what users do. It also shows that a program’s improvement is tied to the quality of its users’ research.

Transparency and good legal research

This raises two important questions: What is good legal research? And how do we know when it has happened?

Determining whether a lawyer understood and prioritized cases correctly is no simple matter; legal problems are not math problems. For one, identifying whether the top-ranked case was the best one for a particular lawyer’s case requires knowledge of the facts of the lawyer’s case. Facts are tough for research companies to access.

There is an alternative: Rather than assessing the quality of legal research on their own, companies can pass the buck to their users, trusting that they know when their program is doing a good job.

Delegating evaluation to consumers makes the second question the important one.

As mentioned, programs can monitor when individual users click on results, scroll down, download, email, print, perform further searches, and the like. Some companies go further, however, interrupting searches by asking users to provide feedback. Ross Intelligence, for example, has asked users whether the files they view are relevant before they close them. The user can click a thumbs-up or a thumbs-down.

But is this a reliable way of assessing quality? Does printing out one case but not another reliably show that the former is more relevant? Perhaps, but user interactions generally occur before users see the consequences of their research, making it harder for them to know whether their search results are paying off. They know only the cases they have read; they are not yet aware of those they missed.

For this reason, several companies opt to employ human experts as a safeguard to keep the learning process from going awry.

“Regarding the quality of the data that goes into developing a machine learning model, it is a well-known rule of thumb that a company should expect to spend 80 to 90 percent of the time on a machine-learning project in data acquisition, cleansing and review,” explained Robert Kingan, data scientist at Bloomberg Law. Employing humans cuts down the costs saved by automation, however, and it might be too small a measure to be an adequate safeguard.

The machine-learning process would likely improve if lawyers understood what was happening during their natural-language searches. They would get some notice that algorithmic choices were about to send their research off course and could make a correction. Reality isn’t so kind: They cannot see the search algorithm, nor can they see how the search algorithm changes under machine learning.

Even if lawyers could view the algorithms, they would struggle to understand them. The companies that own the algorithms have trouble knowing exactly what their algorithms are doing.

“Many machine learning techniques result in models of the data that consist of, say, hundreds of thousands to millions of numerical weights used to determine how input data is transformed to output. One can apply tests to such an algorithm and review examples from ‘gold standard’ training data to get a feel for how it behaves, but it may be impossible to interpret the algorithm itself in human terms,” Kingan says. Those human experts face a challenge.

To some extent, this situation leaves both lawyers and research companies fumbling in the dark: Lawyers don’t have a complete picture of what is happening, and research companies are relying on the lawyers to teach their machines.

Companies hope lawyers will be able to tell when a company has made better research decisions than its competitors, but they don’t want those lawyers to be able to tell what those decisions are. They are apparently willing to accept the price of secrecy, even if it includes slowing innovative progress.

Still, companies should be concerned that lawyers will have too much trouble distinguishing products from each other and conclude that all competitors are pretty much the same. Perhaps anticipating that problem, many leading companies now focus on things that are easy for lawyers to judge—time and simplicity.

Racing in the dark

Companies want to show that their natural-language searching is more efficient than their competitors’. One tactic is to shorten the stack of potentially relevant cases for lawyers to wade through. The trend is for companies to emphasize “answers” over the comprehensiveness of cases provided. Ross Intelligence claims on its homepage that it “will use A.I. to find you answers from the law in seconds—no more fumbling with boolean queries and thousands of keyword-based results.”

We might be witnessing the start of an efficiency race, with competitors speeding towards ever-shorter results based on ever-simpler inputs.

The broader consequences of a race are unclear. On the one hand, it could increase productivity, potentially lowering lawyers’ fees. On the other hand, it could increase errors, as when overaggressive, secret algorithmic choices cut out vital cases.

Of course, if algorithms were transparent, this second scenario would be less likely to happen. But it might introduce a dangerous problem: Transparency would make search results easier to manipulate. Just as businesses engage in search engine optimization to propel them to the top of Google, sophisticated users could engage in tactics to maximize the likelihood that cases disappear from legal search results. While this might occur anyway, knowing just how the algorithms work makes it far easier to stack the deck.

It doesn’t matter: legal research companies would almost certainly suffer if there were complete transparency, and they are the ones making the decisions.

Transparency carries the risk that competitors would coalesce around the best search method. The resulting lack of differentiation could lead to a price war, which sellers desperately want to avoid.

A price war, however, might benefit lawyers and their clients. One of the great promises of machine learning is that, by making legal research more efficient, it will bring legal fees down. But fees will drop further still if the price of legal research products goes down with them.


Brian Sheppard is a professor of law at Seton Hall University School of Law. His research considers the relationship between the way law is expressed or researched and behavior. He is a frequent writer on legal technology.
