Artificial Intelligence & Robotics

Latest version of ChatGPT aces bar exam with score nearing 90th percentile


Image from Shutterstock.

The latest version of the artificial intelligence program ChatGPT has passed the Uniform Bar Examination by “a significant margin,” earning a combined score of 297 that surpasses even the high threshold of 273 set by Arizona.

GPT-4 took all sections of the July 2022 bar exam and earned a score so high that it approaches the 90th percentile of test-takers, according to researchers Daniel Martin Katz, a professor at the Illinois Institute of Technology’s Chicago-Kent College of Law, and Michael James Bommarito, a professor at the Michigan State University College of Law.

“Our analysis highlights that GPT-4 has indeed passed the bar and has done so by a significant margin,” they wrote in a paper posted March 15. The professors collaborated with legal AI company Casetext, according to March press releases from the companies.

GPT-4 did particularly well on the multiple-choice section known as the Multistate Bar Examination, answering 75.7% of the questions correctly, compared with the human average of 68%.

GPT-4 got a passing grade in all seven subjects tested on the MBE, doing best in contracts (answering 88.1% of the questions correctly), followed by evidence (85.2%) and criminal law and procedure (81.1%).

Two of the researchers assigned scores to the essay questions on the Multistate Essay Examination, with input from peers.

“While GPT-4 performs well on many questions, its output is not completely free of errors,” the researchers concluded.

Yet GPT-4 earned a score of 4.2 out of 6 points on the essays. Most jurisdictions use the same scale, and a score of 4 is considered passing.

On the Multistate Performance Test, GPT-4 also received a score of 4.2 out of 6 points.

“We were somewhat surprised at the quality of the output which was generated,” the researchers wrote.

The previous version of ChatGPT didn’t do as well on the exam. It flunked the multiple-choice section, but it did earn passing grades in evidence and torts.

Publications covering the study include Above the Law and Reuters.

See also:

ABAJournal.com: “Can ChatGPT help law students learn to write better?”
