By Paul Lippe
HP defines diversity as “limited to race/ethnicity, gender, LGBT status, and disability status.”
I want to suggest a perhaps radical approach that I believe could do more to advance diversity as well as the profession generally:
Be much more transparent about what we really mean by “performance” for lawyers.
There is widespread agreement on five reasons to pursue diversity (and maybe less widespread agreement on many others):
To some extent, Nos. 2-5 could be summarized as “broadening participation to include historically underrepresented groups will improve performance in a range of ways that you may not appreciate,” and so are appeals to self-interest, not justice.
No. 1 could also fall into that “self-interest, not justice” category, assuming we were to take a long-term view of what is in the overall interests of the country.
As Roosevelt Thomas, who laid out the basic rationale for diversity in a 1990 Harvard Business Review article, wrote: “So long as racial and gender equality is something we grant to minorities and women, there will be no racial and gender equality. What we must do is create an environment where no one is advantaged or disadvantaged, an environment where ‘we’ is everyone.”
So the natural next step would seem to be to articulate that model of performance—diversity is one of a number of reasons law urgently needs to improve how it measures performance. Otherwise, we will always measure performance in ways that reinforce and maintain the status quo, which may be why law lags in diversity.
To illustrate some of the opportunities and challenges in measuring performance, let’s look at two very different areas of competition and performance—Words with Friends and U.S. Supreme Court clerks.
For those of you who don’t know it, WWF is an online Scrabble game typically played on a smartphone. For my money, it is as close to a system of perfect competition and transparency as exists.
Like any game with an element of chance, the letters are randomly distributed, so in any given game, the better player doesn’t necessarily win. But over time, the better player will win more often than not.
Leaving aside how the game is played (again, think Scrabble), in our current world of “big data,” I can always go to a screen that shows my long-term performance as well as my performance versus any particular competitor.
So if I want to see my performance versus Larry, WWF will show
• My lifetime wins; his lifetime wins.
• My highest score; his highest score.
• My average score; his average score.
• My average score per word; his average score per word.
But even more usefully, WWF will show me all the elements of our relative competitive performance:
• My win/loss versus Larry.
• Average word score and average game score for each of us.
• The total historic number of words played for each of us.
So if Larry were consistently beating me, I would have to accept that overall he is better than me at WWF. But at the same time, the data provides me with a road map to improve my performance and beat him, e.g., I could learn every word that he plays that I don’t now play.
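The head-to-head statistics described above amount to a simple computation over game records. As a rough sketch—using a hypothetical `GameResult` structure, since WWF’s actual data model isn’t public—the transparency the app provides looks something like this:

```python
from dataclasses import dataclass

@dataclass
class GameResult:
    """One finished game between me and a single opponent (hypothetical record)."""
    my_score: int
    opponent_score: int
    my_words: int        # number of words I played this game
    opponent_words: int  # number of words the opponent played

def head_to_head(games):
    """Summarize the WWF-style metrics for a series of games against one opponent."""
    my_total = sum(g.my_score for g in games)
    opp_total = sum(g.opponent_score for g in games)
    return {
        "my_wins": sum(1 for g in games if g.my_score > g.opponent_score),
        "opponent_wins": sum(1 for g in games if g.opponent_score > g.my_score),
        "my_avg_score": my_total / len(games),
        "opponent_avg_score": opp_total / len(games),
        "my_avg_per_word": my_total / sum(g.my_words for g in games),
        "opponent_avg_per_word": opp_total / sum(g.opponent_words for g in games),
    }
```

The point is not the code but how little it takes: given a complete, shared record of outcomes, every element of relative performance is computable by either player, which is exactly the transparency law lacks.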
So to my mind, WWF is as close as one can get to a world of perfect competition and transparency, and is as fair as anything could be. Past performance is predictive of future performance, but in a fully transparent way. On the other hand, the rules of WWF are highly constrained (no looking up words, no making up words), so it is probably not possible to innovate within the rules.
Like most games, the fun and challenge comes from operating within somewhat fixed constraints. And there is no room for explicit or implicit bias.
Let’s compare WWF to the game of being a lawyer, and specifically let’s narrow the competition to the 30 or so law school grads every year who are selected to be U.S. Supreme Court clerks, compared with the other 470 or so “top” law school grads who are very similar but don’t have that clerkship experience.
My focus is not on the process of being selected as a clerk, but on the subsequent competition between those 30 clerks and the roughly 470 near-equivalent law school grads.
We can reasonably say that the 30 clerks may well outperform the 470 nonclerks in their long-term careers for at least three reasons:
• The qualities that caused them to be selected—strong academic track record, good work habits and writing skills, most likely good networking and interview skills—are likely to be predictive of success as a lawyer.
• The experience of being a clerk—the learning, networking and savoir faire—will add to their capabilities.
• The brand of having been a clerk means that throughout their careers, whenever there is an opportunity and a relative assessment between one of the 30 and one of the 470, the 30 clerks will almost always be preferred over the 470 nonclerks, and therefore compound their head start.
Again, the first two are hard to argue with, but the third is another example of how nontransparency holds law back. While past performance is somewhat predictive of future performance, I can’t believe that performance at age 27 is so determinative that the clerks would almost always outperform the nonclerks. And frankly, to always be preferred is limiting to the clerks as well, as there is a real danger that their learning decelerates.
To state the obvious, we could make the same comparison between the top 500 lawyers and the next 500, who perhaps went to slightly less elite schools or got slightly less outstanding grades, and so on down the line for every tranche of 500. Law’s system of performance is based on historic metrics (what Daniel Kahneman refers to as “heuristics”) that largely reflect academic performance; once sorted, it’s very hard for any lawyer to outperform expectations. So we’re not really measuring current performance; we’re measuring our perception of performance based on what someone did in the past or what they look like, limiting transparency, improvement and diversity.
So how can we better measure lawyer performance, recognizing that we are a long way from WWF? I gave a talk on that topic last month at Santa Clara Law School. I would say there are four rules:
• Metrics must be outcomes-based, aligned with clients’ metrics.
• Metrics must be process-specific, i.e., the metrics for doing an M&A transaction are very different from the metrics for a temporary restraining order.
• It’s better to imperfectly measure important things than perfectly measure unimportant things.
• Metrics work much better when considered with ideas about how to improve performance (e.g., Design Thinking), because in general it’s not super-meaningful to measure something unless we have some notion of comparison or improvement.
If we make performance more transparent, I think we can:
• Reduce bias.
• Provide a road map to improve performance, overcoming historic disadvantages.
• Accelerate innovation.
• Demonstrate more effective performance by individuals and teams.
• Improve satisfaction of lawyers and society’s satisfaction with lawyers.
Paul Lippe, the former CEO of Legal OnRamp, is a member of Elevate Services’ Advisory Board.