Legal Education

Is unauthorized artificial intelligence use in law school an honor code violation?


“We have certainly been having issues with students using generative AI in a way that the faculty member does not view as appropriate,” says Benjamin Barros, the dean of the Stetson University College of Law in Florida. “I certainly don’t want students to find themselves in honor code situations inadvertently.” (Image from Shutterstock)

With generative artificial intelligence’s growing availability and acceptance into students’ workflow, some law schools are wondering whether unauthorized AI use should be an honor code violation—something that could potentially trip up aspiring lawyers in the character and fitness portion of the bar licensure process.

Unauthorized use of AI, seen as a type of plagiarism, has raised questions at the Stetson University College of Law in Florida regarding how best to handle the possible academic misconduct.

“We have certainly been having issues with students using generative AI in a way that the faculty member does not view as appropriate,” says Benjamin Barros, the Florida law school’s dean. “I certainly don’t want students to find themselves in honor code situations inadvertently.”

“We don’t have to go straight to the honor code, right? And that’s a real challenge right now that we’re all trying to figure this out,” he adds.

Before a candidate can be admitted to the bar, they must pass the character and fitness portion of bar licensure proceedings, says David L. Hudson Jr., an associate professor at the Belmont University College of Law in Tennessee. Clear and convincing evidence of good moral character is the most common burden of proof for applicants, he says.

“An academic misconduct charge is one of the major red flags,” says Hudson, who represents bar applicants in character and fitness proceedings and often writes about legal ethics, including for the ABA Journal, “and is among the most serious types of hurdles to try to overcome.”

The Nevada Board of Bar Examiners treats honor code violations very seriously, and a candidate with a violation could potentially not be allowed to take the bar exam, Richard M. Trachok II, the board’s chair, told the Journal.

Each case’s circumstances would be carefully considered, and the consequences would depend on what the professor’s specific instructions were to the students, he says.

“If those instructions said, ‘No use of AI,’ and somebody ignored those instructions, yeah, that’s blatant cheating,” Trachok adds. “That’s pretty tough.”

While noting that the Nevada Board of Bar Examiners has not yet encountered such a case, he says the disciplinary hearing process would mirror that of any plagiarism offense.

“We would refer to the character and fitness committee, who would be conducting an investigation,” Trachok says. “They would make a ruling, and I would be the eventual person to make a determination of the sanction.”

Candidates who contest the ruling can appeal to the Nevada Supreme Court, he adds.

Daniel W. Linna Jr. of the Northwestern University Pritzker School of Law: “If you have a policy that you cannot use generative AI to do substantive course work and a student does a Google search, are they supposed to avert their eyes?” he says. “That’s generative AI at the top of the page.”

Lack of clarity

The problem stems from unclear AI policies within law schools and universities, says Daniel W. Linna Jr., a senior lecturer and the director of law and technology initiatives at the Northwestern University Pritzker School of Law in Illinois.

These cases “illustrate why these policies are problematic,” says Linna, a 2018 Journal Legal Rebel.

The vast majority of policies that Linna has seen at law schools don’t draw firm lines between what is and what isn’t acceptable.

“If you have a policy that you cannot use generative AI to do substantive course work and a student does a Google search, are they supposed to avert their eyes?” he says. “That’s generative AI at the top of the page.”

The University of California at Berkeley School of Law’s policy specifically states that generative AI may not be used for anything that would be considered plagiarism if humans were involved and “may not be used for any purpose in any exam situation.”

Certain forms of AI use are honor code violations, says Chris Hoofnagle, a professor and the faculty director of the school’s Berkeley Center for Law & Technology.

“If a student uses generative AI to summarize their outline, and the system introduces fake cases and the cases make it onto an exam, we consider that using generative AI,” Hoofnagle says.

Policies are being tweaked as AI evolves, says Paul Rose, the dean at the Case Western Reserve University School of Law in Ohio.

“We are trying to be as explicit as possible about when AI can be used and when it can’t,” Rose says.

Even if a student is found to have used AI incorrectly, the potential remains for disciplinary action that is reported to the state bar, he adds.

The Stetson University College of Law’s faculty also is having ongoing conversations about AI use, Barros says.

“Things are changing so rapidly,” he says. “We don’t want to be setting the equivalent of a perjury trap for our students, right? We want to be transparent and communicate well with our students.”

Linna also notes that tools currently available to detect AI use are not reliable—with some biased toward non-native English speakers, for instance.

“We don’t have a good means of policing this,” Linna says. “What if someone is wrongly accused, or maybe even makes innocent mistakes? This should really force law schools to reconsider what we’re trying to accomplish with these policies and whether we’re doing more harm than good.”

Kellye Testy of the Association of American Law Schools: “Nobody wants a student to be tripped up at the bar over something that they inadvertently did. It’s traumatizing,” Testy says. “AI is integrated, and it’s everywhere. We are better off to empower people.” (Photo by Yosef Kalinko)

Along with clear AI policies, says Kellye Testy, the executive director and CEO of the Association of American Law Schools, the solution includes solid ethical training in using AI before law students enter the workplace, where comfort with the tool will be expected.

“I’ve seen a real seismic shift in the last year in terms of the schools really getting in there and understanding that they need to clarify things for their students and see AI as a tool they need to understand and use properly,” says Testy, a 2022 Journal Legal Rebel.

Law school pedagogy must include critical thinking, administrators note, along with clear communications between students and professors.

“Nobody wants a student to be tripped up at the bar over something that they inadvertently did. It’s traumatizing,” Testy says. “AI is integrated, and it’s everywhere. We are better off to empower people.”