Is predictive coding better than lawyers at document review?
A lawyer who pushed for and obtained a judge’s order mandating predictive coding in a civil dispute is pleased with the computer’s findings.
The results suggest a computerized document search using predictive coding may surpass human ability, lawyer Thomas Gricks tells the Wall Street Journal Law Blog (sub. req.).
Last June, a judge in Loudoun County, Va., granted Gricks' predictive coding motion in a dispute over a collapsed roof at an aircraft hangar. It was the first time a judge mandated predictive coding over the objection of one of the parties, said Gricks, who chairs the e-discovery practice group at Schnader Harrison Segal & Lewis.
The e-discovery process got under way when lawyers coded a sample of 5,000 documents out of 1.3 million as either relevant or irrelevant. Those judgments were then used to train the software's algorithm, which searched the remaining documents. The program turned up about 173,000 documents deemed relevant.
To see how well the program worked, the lawyers checked a sample of about 400 of the documents it had deemed relevant; about 80 percent were indeed relevant. They then checked a sample of the documents deemed irrelevant; about 2.9 percent were possibly relevant. Combining those two figures, about 81 percent of all relevant documents in the collection were found.
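The arithmetic behind that 81 percent figure can be reproduced from the numbers reported above. This is a minimal sketch using the article's approximate counts (1.3 million documents, 173,000 flagged relevant, 80 percent sample precision, 2.9 percent miss rate in the discards), not actual case data:

```python
# Recall estimate from the article's reported sampling figures.

total_docs = 1_300_000
flagged_relevant = 173_000   # documents the program marked relevant
precision = 0.80             # share of the ~400-doc relevant sample that was truly relevant
miss_rate = 0.029            # share of "irrelevant" documents that were possibly relevant

# Extrapolate the sample rates to the full collection.
true_positives = flagged_relevant * precision                 # relevant docs found
false_negatives = (total_docs - flagged_relevant) * miss_rate  # relevant docs missed

recall = true_positives / (true_positives + false_negatives)

print(f"Estimated relevant docs missed: {false_negatives:,.0f}")  # ≈ 32,683
print(f"Estimated recall: {recall:.1%}")                          # ≈ 80.9%
```

The roughly 32,700 estimated misses are also the source of the "more than 31,000 documents" concern raised below.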
“For some this may be hard to stomach,” the Law Blog says. “The finding suggests that more than 31,000 documents may be relevant to the litigation but won’t get turned over to the other side. What if the smoking gun is among them?”
But the chances of missing a relevant document may be greater when humans do the review, according to a 2011 article in the Richmond Journal of Law and Technology. It cited research showing that predictive coding finds about 77 percent of relevant documents on average, while human reviewers find only about 60 percent.
ABA Journal: “Beyond Prediction: Technology-Assisted Review Enters the Lexicon”