Artificial intelligence reflects the people in the room

Nicole Black.
We like to imagine technology as neutral, but the reality is that artificial intelligence is developed behind doors that most of us will never enter. I was rudely reminded of this concept last year, when a generative AI prompt making the rounds caught my attention: “Based on what you know about me, draw a picture of what you think my current life looks like.”
AI output can be biased
My curiosity piqued, I gave it a try. The result was an image of a casually dressed white woman working in a cozy home office, surrounded by domestic calm and bliss. She was holding a glass of wine while sitting at a small desk with a laptop and stacks of books, a puppy at her feet.
The serene scene struck me as somewhat odd, given how often I had discussed legal technology issues with that ChatGPT account. I had, however, also asked about other topics on occasion, including puppy training and my interest in wine. So perhaps it was a fair representation.
However, I decided to put that assumption to the test, since I’d previously experienced biased AI output. I wondered whether my work account, which used ChatGPT Enterprise, would produce noticeably different results. I’d used it solely for work-related purposes, so the questions I asked typically centered on legal technology.
I entered the same query into my work account. It responded with an image of a white man in a suit seated at a large desk, surrounded by multiple computer monitors, data dashboards and AI analytics.
Even though the same person had used both tools, with the same overlapping technology interests, the outcomes were completely different. The underlying algorithmic assumptions had produced glaringly different images—and genders.
According to Nicole Black, the images above were generated with ChatGPT with the prompt: “Based on what you know about me, draw a picture of what you think my current life looks like.” The result from her personal account was an image of a casually dressed white woman working in a cozy home office, and the result from her work account was an image of a white man in a suit seated at a large desk.
Diverse perspectives matter
This wasn’t the first time I’d encountered a biased AI output, and it wouldn’t be the last. AI systems necessarily reflect both the perspectives of the people who design and develop them and the data upon which they are trained. Underlying assumptions determine which problems are prioritized, which harms are treated as edge cases and which perspectives are deemed relevant. For decades, the rooms where those assumptions were debated and defined have been dominated by white men.
Historically, women and marginalized people have been underrepresented during the initial development stages and have rarely been involved in determining AI problem statements, training data choices, risk frameworks and deployment standards. That in turn impacts the quality and accuracy of the final output.
The result is AI systems that reduce the range of viewpoints and lived experiences, negatively impacting end users—the majority of whom were given no voice in the room where the tools were created.
Women + AI Summit flips the script
This became very clear to me when I recently attended The Women + AI Summit at Vanderbilt Law School, an event “designed to inform, empower and connect women across disciplines and industries who seek to shape the present and future of artificial intelligence,” according to its website. Topics covered included AI literacy and leadership, ethical- and justice-centered innovation, data privacy and health care agency, community building, and practical frameworks for responsible AI adoption in the legal profession and beyond.
Women-led conferences like this one are all the more important in 2026, when diversity efforts like the Mansfield Certification program are under attack by the current administration.
Launched in 2017 and adopted by hundreds of law firms, the program is designed to encourage diversity in law firm leadership. The need for diverse perspectives in law firms and in AI development is critical, especially as opportunities to correct structural inequities across institutions are closing.
This conference, attended by women and their allies, offered a glimpse into what happens when multiple perspectives are prioritized. When women are not sidelined or competing to be heard, the diversity of thought and experience improves the overall outcome for everyone involved.
At the Women + AI Summit, women controlled the conversation, and the discussions moved quickly, without the friction that often arises when participants must compete for authority or a voice. New ideas surfaced, different ways of viewing a problem were considered, and everyone was heard.
The difference was not cosmetic. Instead, it offered a glimpse of how more diverse perspectives could change the course of AI development, the outputs of AI tools and the outcomes achieved with AI. That dynamic is a model for how AI development could be approached differently moving forward.
We need more voices in the room where AI happens
Biases built into AI don’t remain contained. They scale over time and multiply, affecting decisions and opportunities in ways that can be difficult to reverse. That’s exactly why varied perspectives must be included in AI discussions now, while frameworks and norms are still being established.
The genie may already be out of the bottle, but we still have influence over what happens next. The people in the rooms where AI is developed will determine what these systems become. If we want better outcomes, we need more voices in those rooms.
Nicole Black is a Rochester, New York-based attorney, author and journalist. She is the principal legal insight strategist at 8am, parent company of LawPay, MyCase, CasePeer and DocketWise. She is the nationally recognized author of Cloud Computing for Lawyers and is a co-author of Social Media for Lawyers: The Next Frontier, both published by the American Bar Association. She writes regular columns for ABAJournal.com and Above the Law, has authored hundreds of articles for other publications, and she regularly speaks at conferences regarding the intersection of law and emerging technologies. Follow her on LinkedIn, or she can be reached at [email protected].
This column reflects the opinions of the author and not necessarily the views of the ABA Journal—or the American Bar Association.