Why state bars are struggling to keep pace with AI in legal practice

Joe Stephens is a consulting attorney and legal AI expert at Steno, a legal technology company that provides tech-enabled court reporting, litigation support and remote deposition services.
Walk into any law firm today, and you’ll likely find attorneys using artificial intelligence tools for everything from contract analysis to brief writing. Yet if you walk into most state bar associations, you’ll find regulatory frameworks that haven’t caught up to this new reality.
Despite AI becoming as common in legal practice as email and word processing, many state bar associations have yet to issue comprehensive guidance on its ethical use. Among those that have, the approaches are so varied that an attorney practicing across state lines might face conflicting standards—or no standards at all.
This regulatory vacuum has appeared at a critical moment. The legal profession is undergoing its most significant technological transformation since the advent of electronic filing, yet the rules governing this transformation remain largely unwritten.
Patchwork of approaches
The ABA attempted to provide national direction through Formal Opinion 512, applying existing professional conduct rules to generative AI use. The opinion establishes that lawyers must understand AI’s capabilities and limitations, protect client data, supervise AI outputs and bill ethically for AI-assisted work.
Yet this raises a fundamental question: Is a broad application of existing rules sufficient for technology that can draft contracts, predict case outcomes and analyze thousands of documents in minutes?
Some state bars think not. The New York State Bar Association’s Task Force on Artificial Intelligence released comprehensive recommendations in April 2024 that addressed issues like client notification of AI use and data privacy concerns. California issued practical guidance in November 2023 recommending that attorneys consult with information technology professionals to ensure AI systems adhere to stringent security and data retention protocols. Texas took yet another approach, with its Taskforce for Responsible AI in the Law recommending mandatory technology-related CLE requirements that include AI competency.
Meanwhile, in states without specific guidance, attorneys are navigating this new landscape using decades-old rules written for a predigital age. It’s like trying to regulate air traffic using rules written for horse-drawn carriages.
Beyond hallucinations
The media loves stories about lawyers citing fake cases generated by ChatGPT. The Mata v. Avianca sanctions and the United States v. Cohen reprimand make for compelling cautionary tales. But focusing on these spectacular failures misses the forest for the trees—namely, that these hallucination incidents represent basic competency failures that any first-year associate should avoid. They’re the equivalent of citing a case without “Shepardizing” it: embarrassing and sanctionable, but hardly the most profound challenge AI poses to legal practice. The real ethical minefields lie elsewhere:
• The “black box” problem. Most AI systems operate as “black boxes”—meaning they produce outputs without explaining their reasoning. When an AI tool recommends a litigation strategy or identifies key documents in discovery, can attorneys fulfill their duty of competent representation without understanding how the AI reached its conclusions? This opacity becomes particularly troubling in criminal defense, where an attorney might need to explain why they pursued one strategy over another.
• A data security time bomb. Every query to an AI system potentially exposes client information. But unlike traditional software that processes data locally, many AI tools send information to remote servers for processing. Some systems retain this data for model training. Others may store it indefinitely. When a lawyer uploads a confidential merger agreement to an AI tool for analysis, where does that data go? Who has access? For how long? Most attorneys can’t answer these questions, but they are fundamental to maintaining client confidentiality.
• The bias amplification effect. AI systems trained on historical legal data inevitably learn historical biases. If an AI tool is trained on decades of caselaw from an era of systemic discrimination, it may perpetuate those biases in its recommendations. When an AI suggests harsher plea deals for certain defendants or consistently favors certain types of litigants, it isn’t being malicious; it’s reflecting the patterns in its training data. But attorneys using these tools may unknowingly become instruments of algorithmic discrimination.
• The competency paradox. The ABA Model Rules of Professional Conduct say lawyers must competently use technology, including AI. But what does competence mean when technology evolves monthly? Today’s best practices might be obsolete by next quarter. An attorney who became “AI competent” in 2023 might be dangerously behind in 2025. Unlike traditional legal knowledge that evolves gradually, AI competency requires constant updating—which is a challenge for busy practitioners and an even bigger challenge for regulators trying to define minimum standards. Do we need to test competence levels regularly, as we do for pilots?
• The economic justice gap. AI tools promise to democratize legal services by making them more efficient and affordable. But the best AI tools are expensive, creating a new divide between firms that can afford cutting-edge technology and those that cannot. If AI becomes essential for competent representation, what happens to solo practitioners and legal aid organizations that can’t afford these tools? The profession risks creating a two-tier system where quality representation depends on technological access.
The federal vacuum
Compounding these challenges is the absence of federal oversight. While the European Union has passed comprehensive AI legislation and other countries are developing national frameworks, the U.S. approach remains fragmented. This leaves state bars as the de facto regulators of AI in legal practice—a role they’re neither equipped for nor eager to assume.
State bars excel at addressing traditional ethical issues: conflicts of interest, client funds, advertising standards. But regulating rapidly evolving technology requires different expertise and resources. It’s asking organizations designed to oversee professional conduct to become technology regulators overnight.
Charting a path forward
The solution isn’t to halt AI adoption; that ship has sailed. Nor is it to impose rigid rules that will be obsolete before the ink dries. Instead, the legal profession needs adaptive frameworks that can evolve with technology. This might include:
• Dynamic standards. Rather than static rules, create principles-based guidelines that can accommodate technological change.
• Safe harbors. Establish clear practices that, if followed, provide protection from disciplinary action.
• Collaborative governance. Bring together bar associations, tech companies and practitioners to develop standards collectively.
• Tiered requirements. Different rules for different practice areas and firm sizes, recognizing that a solo practitioner’s AI needs differ from those of a global firm.
• Continuous education and testing. Mandatory, regularly updated AI training that goes beyond one-time CLE credits, paired with regular testing.
The legal profession stands at an inflection point. We can either proactively shape how AI transforms legal practice or reactively deal with the consequences of unregulated adoption. But maintaining the status quo—where many attorneys practice without clear AI guidelines—is not sustainable.
The question facing state bars isn’t whether to regulate AI but whether they can develop frameworks nimble enough to govern technology that evolves faster than traditional regulatory processes allow. The answer will shape not just how lawyers practice but whether the profession can maintain its commitment to competent, ethical representation in an AI-driven future.
Joe Stephens is a consulting attorney and legal AI expert at Steno who helps legal teams work more efficiently. He draws on his experience building a large rural public defender office and working in the Texas legislature to enhance Steno’s innovative deposition services and legal tech tools.
Mind Your Business is a series of columns written by lawyers, legal professionals and others within the legal industry. The purpose of these columns is to offer practical guidance for attorneys on how to run their practices, provide information about the latest trends in legal technology and how it can help lawyers work more efficiently, and strategies for building a thriving business.
Interested in contributing a column? Send a query to [email protected].
This column reflects the opinions of the author and not necessarily the views of the ABA Journal—or the American Bar Association.