Artificial Intelligence & Robotics

Profs trade notes as law schools write generative AI policies



Image from Shutterstock.

Updated: With more generative artificial intelligence platforms becoming widely available, law schools are adjusting academic integrity policies, while professors are exchanging notes on how best to integrate the emerging technology in the classroom.

Starting in the spring semester this year, second- and third-year law students will have access to Lexis+ AI, a generative AI platform, a little more than a year after OpenAI launched its free ChatGPT chatbot.

As these platforms become available and evolve, law schools are reacting with initial policies that allow professors to adjust the rules to suit the pedagogical needs of their classes.

The policy at the University of California at Berkeley School of Law allows generative AI to assist in some tasks, such as using it as a search engine or to correct grammar. But it bans its use in exams or in ways that would be considered plagiaristic. Instructors can write up policies that deviate from this rule.

Meanwhile, the policy at the Northwestern University Pritzker School of Law is broad, stating that “students are prohibited from using generative AI to produce, derive or assist in creating any materials or content that is submitted to the instructor.” However, instructors can allow generative AI “to any extent they deem appropriate.”

“If the faculty member doesn’t make it clear what the policy is, then the default policy is that it’s prohibited,” says Daniel W. Linna Jr., a senior lecturer and the director of law and technology initiatives at the Northwestern University Pritzker School of Law.

Linna has taught generative AI in the law school’s coursework for years, and other instructors at the school now are introducing structured activities to learn the benefits and risks of generative AI in classes, he says.

At the Fordham University School of Law, a memorandum on academic integrity requires students to sign a waiver with each test, stating: “By submitting this exam, I certify that I have not consulted, collaborated or shared any information with anyone, nor have I utilized unauthorized materials, including any artificial intelligence or machine learning tools, during this exam.”

Linna finds these types of policies problematic.

“One of the big problems here is definitions—because when you say you can’t use generative AI, what does that mean? It’s built into all these platforms and tools,” he says.

“If you ask a lawyer or a law student to certify that they have not used machine learning or AI, they should refuse to do so,” he says, because they may have used it unintentionally.

The Suffolk University Law School’s policy addresses that concern. It requires students to certify, within two hours of submitting take-home exams, papers and projects, that they haven’t used AI, except for spelling and grammatical corrections.

Some instructors are writing policies for specific classes. At the University of California College of the Law at San Francisco, Alice Armitage, a professor and the director of applied innovation, which includes overseeing the LexLab, spells out expectations on how to use the tool in the “Generative AI and the Business of Law” course that she will teach this spring. Those include refining prompts, checking all facts found via generative AI with an additional source, and adding a paragraph to assignments explaining how AI was used in that assignment.

To help law school faculty exchange ideas and notes, Linna and April Dawson, the associate dean of technology and innovation and a professor at the North Carolina Central University School of Law, created an AI and law-related course list on a website for instructors.

To date, 142 faculty members from 104 law schools across the United States have contributed to the list, as well as faculty from a few schools in other countries, including Canada, the United Kingdom, Australia, India, Kenya and Spain, according to Linna. Linna and Dawson also created a Google group to discuss questions about further developing AI in coursework, Linna adds, and there is an email list developed by Stanford University called CyberProf.

“There’s a lot of interest,” Linna says. “They’re talking about what the policy is. This is constantly changing technology, and we need to keep up to date.”

Updated Jan. 2 at 4:05 p.m. to reflect when Alice Armitage’s “Generative AI and the Business of Law” course is being taught and to add more details about Stanford University’s CyberProf email list.
