5 things lawyers should know about artificial intelligence
By Brenda Leong and Patrick Hall
Although artificial intelligence has been the subject of academic research since the 1950s and has been used commercially in some industries for decades, it is still in its infancy across much of the broader economy.
The rapid adoption of this technology, along with the unique privacy, security and liability issues associated with it, has created opportunities for lawyers to help their clients capture its economic value while ensuring its use is ethical and legal.
However, before advising clients on AI issues, lawyers should have some basic technical knowledge to answer questions about legal compliance.
1. AI is probabilistic, complex and dynamic
Machine learning algorithms are enormously complex, extracting what can amount to billions of decision rules from training data and applying those rules to produce an output such as a recommendation or prediction. Even the most accurate, well-designed AI system is probabilistic in nature, which guarantees that it will, at some point, produce an incorrect result.
Additionally, most systems are trained using data from a snapshot in time, so when events in the world shift away from the patterns in the data (as in the case of the COVID-19 pandemic), the system is likely to be wrong more frequently, requiring more legal and technical attention.
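To make this concrete, below is a minimal sketch, in Python, of one common drift check used in model risk management: the population stability index, which compares the distribution of a model input at training time against what the system sees in production. The data, variable names and the 0.25 threshold are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of one common drift check: the population stability
# index (PSI), used in model risk management to compare the distribution
# of a feature (or of model scores) at training time against production.
# All data, names and thresholds here are illustrative assumptions.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of the same feature; larger PSI = more drift."""
    # Bin edges come from the training-time ("expected") distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values

    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)

    # Floor the proportions to avoid division by zero and log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Illustration: incomes observed at training time vs. after an economic
# shock such as the pandemic, when the world shifted away from the data.
rng = np.random.default_rng(0)
training_income = rng.normal(60_000, 15_000, 10_000)
current_income = rng.normal(48_000, 22_000, 10_000)

psi = population_stability_index(training_income, current_income)
# An informal rule of thumb: PSI above roughly 0.25 signals major drift
# that warrants technical review and, increasingly, legal attention.
print(f"PSI = {psi:.2f}")
```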
Fortunately, existing regulatory frameworks address this type of risk in pre-AI contexts. The Federal Reserve’s model risk management guidance (SR 11-7) lays out processes and controls that serve as a starting point for handling the probabilistic and dynamic characteristics of AI systems. In-house and law firm lawyers who find themselves advising on AI-based systems would do well to understand these best practices and to generalize the Federal Reserve’s guidance beyond its original banking context.
2. Make transparency an actionable priority
The complexity of AI systems makes ensuring transparency difficult, but organizations deploying AI can be held liable if they cannot provide required information about how their systems reach decisions.
Lawyers would benefit from familiarizing themselves with frameworks such as the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA), which require that consumers receive “adverse action notices” when automated decision systems are used to deny them credit or take other unfavorable actions against them. These laws set an example for the content and timing of notifications relating to AI decisions that could negatively affect customers, and for establishing the terms of an appeals process against those decisions.
One of the best ways to promote transparency in AI is to establish internal policies that implement best practices for documenting AI systems. Standardized documentation, with an emphasis on an organization’s development, measurement and testing processes, is crucial to ongoing and effective governance of AI systems. Attorneys can help by creating templates for such documentation (taking into account any external compliance requirements under applicable law) and by ensuring that documented technology and development processes are consistent and complete.
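As a hypothetical illustration of what such a template might capture, the sketch below expresses a documentation skeleton in code, loosely inspired by the “model card” approach to documentation. Every field name is an assumption chosen for illustration, not a regulatory requirement.

```python
# A hypothetical skeleton for standardized model documentation.
# Every field below is an illustrative assumption, not a legal standard;
# counsel should tailor the template to applicable law and the business.
from dataclasses import dataclass

@dataclass
class ModelDocumentation:
    model_name: str
    business_purpose: str            # what decision the system supports
    intended_users: str              # who may rely on its outputs
    training_data_sources: list[str]
    known_limitations: list[str]     # e.g., populations where accuracy drops
    performance_metrics: dict[str, float]
    fairness_testing: str            # methods used and results
    adverse_action_process: str      # how affected customers are notified
    legal_review: str = ""           # sign-off from counsel
    last_reviewed: str = ""          # documentation should be kept current

# Example entry with fabricated details:
doc = ModelDocumentation(
    model_name="credit_line_model_v3",
    business_purpose="Recommend credit line increases",
    intended_users="Lending operations staff",
    training_data_sources=["2018-2021 account histories"],
    known_limitations=["Not validated for thin-file applicants"],
    performance_metrics={"AUC": 0.81},
    fairness_testing="Adverse impact ratios computed quarterly",
    adverse_action_process="ECOA-compliant notices via loan platform",
)
```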
3. Bias is a major problem—but not the only problem
AI systems learn by analyzing billions of data points collected from the real world. This data can be numeric, such as loan amounts or customer retention rates; categorical, like gender and educational level; or image-based, such as photos and videos. Because most systems are trained with the data generated by existing human systems, the biases that permeate our culture also permeate the data.
There is no such thing as an unbiased AI system. If an organization is designing or using AI systems to make decisions that could be discriminatory under the law, attorneys should be involved in the development process alongside data scientists.
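One simple screen that attorneys and data scientists can review together is the adverse impact ratio, which compares favorable-outcome rates across demographic groups. The sketch below uses fabricated data, and the “four-fifths” threshold borrowed from U.S. employment guidance appears purely for illustration.

```python
# A minimal sketch of one widely used bias screen: the adverse impact
# ratio (AIR), which compares favorable-outcome rates across groups.
# The 0.80 "four-fifths rule" threshold comes from U.S. employment
# guidance and is shown for illustration only; the data are made up.
import numpy as np

def adverse_impact_ratio(outcomes, groups, protected, reference):
    """Favorable-outcome rate for the protected group divided by the
    rate for the reference group. Values below ~0.80 are a red flag."""
    outcomes, groups = np.asarray(outcomes), np.asarray(groups)
    rate_protected = outcomes[groups == protected].mean()
    rate_reference = outcomes[groups == reference].mean()
    return rate_protected / rate_reference

# 1 = loan approved, 0 = denied (fabricated example data)
outcomes = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1]
groups   = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

air = adverse_impact_ratio(outcomes, groups, protected="A", reference="B")
print(f"Adverse impact ratio: {air:.2f}")  # below 0.80 warrants review
```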
But as real and important as these concerns are, the extensive focus on bias may result in overlooking other equally important types of risk. Data privacy, information security, product liability, and third-party sharing, as well as the performance and transparency problems already mentioned, are just as critical. Many organizations are operating AI systems without sufficiently addressing each of these additional issues. Look for bias problems first, but don’t get outflanked by privacy and security concerns or an unscrupulous third-party partner.
4. There is more to AI system performance than accuracy
While the quality and worth of an AI system have largely come to be determined by its accuracy, that alone is not enough to measure the broad range of risks associated with the technology. The current conception of accuracy is often limited to performance in lab and test settings, which does not always translate into real-world results. Even then, focusing narrowly on accuracy likely ignores a system’s transparency, fairness, privacy and security. Each of these factors is equally important to the AI system’s impact, whether it feeds other systems or connects directly to customers.
Attorneys and data scientists need to work together to create more robust ways of verifying AI performance that focus on the full spectrum of real-world behavior and potential harms, whether from security threats or privacy shortfalls. While AI performance and legality will not always align, both professions can revise current thinking to imagine measurements beyond high accuracy scores on benchmark datasets.
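As a simple illustration of measuring beyond a single accuracy number, the sketch below scores the same fabricated predictions on overall accuracy, on the error rates that matter most to the people affected, and separately by demographic group.

```python
# A minimal sketch of looking past one accuracy number: the same model
# predictions are scored on overall accuracy, on error rates that matter
# to the people affected, and separately for each demographic group.
# All data below are fabricated for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # actual outcomes
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])   # model decisions
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

accuracy = (y_true == y_pred).mean()

# False negative rate: qualified people the model wrongly rejects.
fnr = ((y_pred == 0) & (y_true == 1)).sum() / (y_true == 1).sum()

# False positive rate: unqualified people the model wrongly approves.
fpr = ((y_pred == 1) & (y_true == 0)).sum() / (y_true == 0).sum()

print(f"Overall accuracy: {accuracy:.0%}, FNR: {fnr:.0%}, FPR: {fpr:.0%}")

# The same score, broken out by group: a model can look accurate overall
# while concentrating its mistakes on one population.
for g in ("A", "B"):
    mask = group == g
    acc_g = (y_true[mask] == y_pred[mask]).mean()
    print(f"Group {g} accuracy: {acc_g:.0%}")
```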
5. The hard work is just beginning
Most organizations utilizing AI technology likely need documentation templates, policies that govern the development and use of the technology, and guidance to ensure AI systems comply with regulations.
Some researchers, practitioners, journalists, activists and attorneys have started the work of mitigating the risks and liabilities posed by today’s AI systems. Businesses are beginning to define and implement AI principles and to make serious attempts at diversity and inclusion on tech teams. Laws like ECOA, the EU’s General Data Protection Regulation, the California Privacy Rights Act, the proposed EU AI regulation and others are forming a legal foundation for regulating AI, even as other fledgling risk mitigation frameworks falter and regulatory agencies continue to over-rely on general antitrust and unfair and deceptive practice standards. As more organizations entrust AI with high-stakes decisions, a reckoning is on the horizon.
Brenda Leong is senior counsel and director of artificial intelligence and ethics at the Future of Privacy Forum. She oversees development of privacy analysis of AI and machine learning technologies, writes educational resource materials for privacy professionals on AI and ethics, and manages the FPF portfolio on biometrics and digital identity, particularly facial recognition and facial analysis. She works on industry standards, governance guidance and collaboration on privacy and responsible data management by partnering with stakeholders and advocates to reach practical solutions for consumer and commercial data uses. Prior to working at FPF, Leong served in the U.S. Air Force, including policy and legislative affairs work from the Pentagon and the U.S. Department of State.
Patrick Hall is principal scientist at bnh.ai, a boutique law firm focused on AI and analytics. Hall also is a visiting professor in the Department of Decision Sciences at the George Washington University. He is a frequent writer, speaker and adviser on the responsible and transparent use of AI and machine learning technologies. Before co-founding BNH, Hall led H2O.ai’s efforts in responsible AI, resulting in one of the world’s first widely deployed commercial solutions for explainable and fair machine learning. He also held global customer-facing and R&D roles at SAS Institute.
Mind Your Business is a series of columns written by lawyers, legal professionals and others within the legal industry. The purpose of these columns is to offer practical guidance for attorneys on how to run their practices, provide information about the latest trends in legal technology and how it can help lawyers work more efficiently, and strategies for building a thriving business.
Interested in contributing a column? Send a query to [email protected].
This column reflects the opinions of the author and not necessarily the views of the ABA Journal or the American Bar Association.