Lawyers have to level up skills in the age of generative AI
Back when using technology meant using tools to correct grammar and spelling or automate documents, lawyers didn’t need much training.
But that’s no longer the case—not with the emergence of generative artificial intelligence, a helpful but tricky tool that has the potential to serve up information that sounds plausible but might not be accurate.
According to experts speaking at “Defining Technology Competency in the Age of Generative AI” on April 4 at ABA Techshow in Chicago, generative AI can help lawyers be more productive and efficient, but it will require them to be more vigilant and cognizant of their ethical duties.
“If you’re thinking that your experience from earlier tools will transfer, you are going to be sorely mistaken,” said Ivy B. Grey, chief strategy and growth officer for WordRake, an editing software company.
Attorneys have a general duty to use tech ethically. ABA Model Rule 1.1 focuses on competence.
To be competent, an attorney has a duty to be aware of tech, understand its risks and benefits, keep abreast of changes and develop the skills to use it, said Kenton Brice, director of the law library and associate professor at the University of Oklahoma College of Law.
“We need to proactively evaluate these tools and how they work before you use them,” he added.
Competency isn’t binary, Brice said. People start out with unconscious incompetency—not knowing what they don’t know. After receiving some training, they become consciously incompetent—knowing just enough to be dangerous, he added.
“Don’t stay there,” he said. “Level up.”
Ultimately, lawyers must become consciously competent or fluent, Brice said, as mandated by the ABA’s Formal Opinion 512, which offers ethical guidance on generative AI use.
One step at a time
To gain competence, Grey offered a to-do list.
First, pay attention to progress. As tech evolves at breakneck speed, attorneys must know what is different in new iterations and whether it’s what’s needed to best serve clients, Grey said.

Next, know the risks and benefits of the type of AI you’re using that’s specific to your practice, she added. Litigators should know how to competently use e-discovery tools, for instance.
Lawyers must use the tools skillfully, she added. Know what you want to accomplish and why this tool is the right one to get the job done, Grey said.
Always be upfront with clients about your generative AI use, getting consent to disclose information and experiment with new technology. Realize AI tools can be interconnected with other tools that could expose a client’s proprietary information, she added.
Also, use professional judgment and a critical eye with advice that stems from AI use. Now that generative AI can write an entire brief, lawyers might be tempted to lean too much on it and submit documents to the court without checking them first.
“Just because you do something, doesn’t mean you should,” Grey said.
She suggested a “sandwich” style workflow with humans doing the work at the beginning and end of the process and generative AI handling the work in the middle.
First, attorneys should find secondary sources to learn as much as possible, she said, then craft thoughtful, informed prompts.
Next, have generative AI create an outline, but keep going back to carefully fill out its sections with a series of specific prompts, “coaching it along the way,” she said.
Finally, humans edit the output and fact-check to create an end product that includes their own insight and thinking.
While there are many considerations when using generative AI, lawyers must not be intimidated by the potential pitfalls of the tech.