Posted Jun 05, 2012 10:30 am CDT
Databases of historical legal information are being built that could help lawyers calculate the odds of winning a case and decide how to craft their arguments.
Algorithms could be used to make predictions based on the historical data, Law Technology News reports. “Called quantitative legal prediction, it’s basically what happens when the latest technology trend—called ‘big data’—meets the law,” the story says. “And it just might change how corporate general counsel and BigLaw manage legal matters and costs, how they craft legal arguments, and whether, how, and where they file a lawsuit.”
The article identifies these efforts already under way:
• Lex Machina has a database of 128,000 intellectual property cases that have been categorized, tagged and coded. The company has used the database to analyze settlement patterns and win rates.
• The nonprofit Harlan Institute, which promotes interest in the Supreme Court, is also investigating quantitative legal prediction. It’s an outgrowth of a Supreme Court fantasy league launched by lawyer Josh Blackman. He suggests in a law review article that it could be “quite conceivable for a bot to crawl through all of the filings in Pacer … and develop a comprehensive database of all aspects of how each court works.”
• E-billing and management vendor TyMetrix has been collecting data on billings and legal matters from its clients since 2009.
• Seyfarth Shaw is collecting data on the amount of time it takes to perform specific tasks to help price its legal services, drawing in part on TyMetrix data. The law firm has also collected information on U.S. Equal Employment Opportunity Commission (EEOC) cases. When a client faces an EEOC charge, the firm can evaluate risk by looking at data on the EEOC investigator and the type of claim.
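At its simplest, the kind of prediction described above is an aggregate query over tagged historical cases: filter past matters by the tags that match the current one (the judge, the type of claim, the investigator) and read off the historical win rate. A minimal sketch of that idea, using invented records and field names rather than any vendor's actual data or schema:

```python
# Hypothetical tagged case records, loosely analogous to a coded case database.
# Field names ("judge", "claim", "won") are illustrative, not a real schema.
cases = [
    {"judge": "A", "claim": "patent", "won": True},
    {"judge": "A", "claim": "patent", "won": False},
    {"judge": "A", "claim": "patent", "won": True},
    {"judge": "B", "claim": "trademark", "won": False},
    {"judge": "B", "claim": "trademark", "won": False},
]

def win_rate(cases, **filters):
    """Historical win rate among cases matching all of the given tag filters."""
    matched = [c for c in cases
               if all(c.get(k) == v for k, v in filters.items())]
    if not matched:
        return None  # no comparable history to predict from
    return sum(c["won"] for c in matched) / len(matched)

print(win_rate(cases, judge="A", claim="patent"))  # 2 wins out of 3 matches
print(win_rate(cases, judge="B"))                  # 0 wins out of 2 matches
```

Real systems layer statistical models on top of far richer data, but the core input is the same: consistently coded outcomes from past matters.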