ABA House adopts 3 guidelines to improve use of artificial intelligence
The House of Delegates adopted a resolution at the 2023 ABA Midyear Meeting on Monday that addresses how attorneys, regulators and other stakeholders should assess issues of accountability, transparency and traceability in artificial intelligence.
Resolution 604 calls on organizations that design, develop, deploy and use AI to follow these guidelines:
- Developers of AI should ensure their products, services, systems and capabilities are subject to human authority, oversight and control.
- Organizations should be accountable for consequences related to their use of AI, including any legally cognizable injury or harm caused by their actions, unless they have taken reasonable steps to prevent harm or injury.
- Developers should ensure the transparency and traceability of their AI and protect related intellectual property by documenting key decisions made regarding the design and risk of data sets, procedures and outcomes underlying their AI.
The Cybersecurity Legal Task Force, which submitted the resolution, also urges Congress, federal executive agencies and state legislatures and regulators to adhere to these guidelines in laws and standards associated with AI.
Lucy Thomson, a founding member of the Cybersecurity Legal Task Force, introduced the measure, saying that following its proposed guidelines “will enhance AI, reduce its inherent risks and facilitate the development and use of AI in a trustworthy and responsible manner.” She added that a broad group of AI experts across the association spent the past year developing the resolution.
Examples of AI innovations include self-driving cars, diagnostic assistants for hospital clinicians and autonomous weapons systems, according to the report that accompanies the resolution. The White House Office of Science and Technology Policy is one of the organizations working on AI governance frameworks and released its “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” in October 2022.
The U.S. Equal Employment Opportunity Commission has joined ongoing efforts to require accountability and prevent discrimination resulting from AI. In 2021, the EEOC launched an initiative to ensure AI used in employment decisions complies with federal civil rights laws. According to the resolution’s report, cities such as New York City and states such as California have also adopted measures that aim to prevent AI from violating anti-discrimination and privacy laws.
Resolution 604 additionally focuses on transparency because, as its report says, people should know when they are engaging with AI and be able to challenge its outcomes when appropriate. This includes people who are denied jobs, refused loans or prevented from obtaining benefits because of decisions made by AI.
Similarly, traceability is a key element of “trustworthy AI,” the report adds. In the event of undesirable performance or outcomes, traceability helps developers understand what went wrong and determine how to prevent similar issues in the future.
“It is not appropriate to shift legal responsibility to a computer or to an algorithm rather than to responsible individuals and other legal entities,” said Thomson, who is also a past chair of the Science & Technology Law Section. “By specifying the essential information that must be included in the design, development and use of AI to ensure transparency and traceability, this resolution will help to ensure that participants in the legal process and in the courts have the capacity to evaluate and resolve legal questions and disputes.”
Past ABA President Laurel Bellows also spoke in favor of Resolution 604. She urged House members to stay up to date on issues related to AI, contending that doing so is part of their responsibility as lawyers.
“This is the example, the prime example, of why this House has an important impact on the world, on each citizen of the world and certainly each citizen of the United States,” Bellows said.
The Antitrust Law Section, Tort Trial and Insurance Practice Section, Science & Technology Law Section and Standing Committee on Law and National Security co-sponsored Resolution 604.
The House of Delegates has considered two other measures relating to AI.
In 2019, the association’s policymaking body passed Resolution 112, which urged lawyers and courts to address ethical and legal issues arising from the use of AI in the practice of law.
Last February, it adopted Resolution 700, which called on governmental entities to refrain from using pretrial risk-assessment tools unless “the data supporting the risk assessment is transparent, publicly disclosed and validated to demonstrate the absence of conscious or unconscious racial, ethnic or other demographic, geographic or socioeconomic bias.”
ABAJournal.com: “Should lawyers embrace or fear ChatGPT?”
ABAJournal.com: “Even with AI certification initiatives, lawyers need more schooling on tech”