Asked and Answered

How can lawyers fight implicit bias? (podcast with transcript)

Many of us don’t think of ourselves as biased, and we don’t want to be prejudiced towards others. But we’re also reluctant to acknowledge the ways bias can creep in, according to academics who study implicit bias.

In this episode of Asked and Answered, the ABA Journal’s Stephanie Francis Ward speaks with Jeffrey Rachlinski, a Cornell Law School professor who has done various studies about implicit bias, including one that focused on trial judges.

Podcast Transcript

Advertisement: This podcast is brought to you by Amicus Attorney, developers of legal practice management software. Let Amicus help you run your practice, so you can focus on what you do best: practice law. Visit AmicusAttorney.com and get started today.

Stephanie Francis Ward: Many of us don’t want to be prejudiced towards others, but we’re also reluctant to acknowledge that perhaps we are, say academics who study implicit bias. I’m Stephanie Francis Ward, and on today’s episode of the ABA Journal’s Asked and Answered, I’m speaking with Jeffrey Rachlinski, a Cornell Law School professor who also has a doctorate in psychology. He’s done various studies about implicit bias, including one that focused on trial judges. Welcome to the show, Professor.

Jeffrey Rachlinski: Thank you. It’s a pleasure to be here.

Stephanie Francis Ward: Tell me about your study of trial judges in terms of what you found.

Jeffrey Rachlinski: Well, along with a number of collaborators, including a federal judge named Andrew Wistrich, and a professor at Vanderbilt named Chris Guthrie, and my colleague Sheri Johnson, the four of us collected data on trial judges making a couple of different decisions that they would normally make in court, but we also measured their implicit bias. We measured the implicit connection they have between African-American faces and positive or negative imagery, and white faces and positive and negative imagery.

What we found was that the judges, like most adults, more closely associate African-American faces with negative concepts and more closely associate white faces with positive concepts. Now, that varied a lot among the judges and indeed we had judges of different races, and among the African-American judges in particular there were a fair number who associated African-American faces with positive imagery. But by and large, most of the white judges harbored that close association between white and good and black and bad.

We also then had them make a couple of different decisions of the sort that trial judges make. So we had them determine the guilt or innocence of a criminal defendant in a case involving a battery, an assault; basically, it involved a fight in a high school locker room that we described to them. For half of the judges, the materials described a white kid who pushed an African-American kid into a bank of lockers and sent him off to the hospital. The other half read about an African-American kid who pushed a white kid into a bank of lockers. And we asked, “Would you find this person guilty?”

Basically, the case involved an incomplete self-defense claim. The perpetrator of this crime is probably guilty of battery but claims self-defense, but doesn’t have a lot of evidence to support it. When this was given to lay adults they showed a difference. When it was an African-American kid that was the defendant, they were 90 percent likely to convict, 90 percent of the folks who read it convicted. When it was a white kid it was only 70 percent. But among the judges it was 80 percent throughout. And that decision didn’t have any correlation with their implicit association. That’s the good news.

Stephanie Francis Ward: But when you say 80 percent, it was 80 percent to convict? Or how did that work exactly?

Jeffrey Rachlinski: I’m sorry; it was 80 percent to convict, yeah.

Stephanie Francis Ward: I see.

Jeffrey Rachlinski: So it’s sort of halfway between what the non-judge adults did, right? So four out of five of the judges said they would convict, but what was important to us was not so much the rate, but the fact that it didn’t vary based on whether the defendant was white or black. But that is different than ordinary adults. And the interesting thing for us is even though we know these judges, in fact, harbor this sort of association between white and good and black and bad, it didn’t translate into their judgment, at least in that context. Again, that’s the good news.

The bad news is we also had them do another task where we didn’t overtly identify the race of the litigants involved; we subtly suggested it using context cues and sort of an odd kind of test on the computer that subliminally primed them to think about African-Americans or whites. It’s a test that’s been used in other contexts, and it leads people to think of African-Americans. We showed them words that are more closely associated with African-Americans, proper nouns like Oprah or Harlem, or words like Jheri curl that most Americans would associate with African-Americans. Or we showed them neutral words.

So we sort of primed them to think African-American or white and then we gave them two judgments involving juvenile defendants and asked them to dispose of the defendant. There was no question that these defendants were guilty. One was convicted of shoplifting, the other of armed robbery. The real question was, what disposition? What would you do with the kid?

And there we did find a relationship between their implicit associations, white and good, black and bad, and what they did with the kid. Namely, the more closely the judges associated white and good, the harder they were on the kid when we had primed them to think of the kid as African-American. And the opposite was true when we had them think of the kid as white.

So that’s to say, the judges who had the black/good, white/bad association—it’s mostly African-American judges but not exclusively—were harder on the white kid than on the African-American kid. So their implicit associations in fact then predicted their disposition of the juvenile shoplifter. And the same was true of the armed robber. And in fact for the armed robber, the effect was a little stronger. That might be because the crime was violent and indeed there’s a fair amount of data that most white adults associate African-Americans with violence much more easily than whites. So the violent crime, in fact, had a somewhat larger effect.

So what we found in the study then was when we were very explicit about the race of the defendant we didn’t really see any differences. The judges were careful, they were on guard, they treated the defendants the same. But when they weren’t on guard, when they weren’t really being told explicitly this is a white kid or this is a black kid, then we saw their unconscious association sort of creep in and influence their judgment.

Stephanie Francis Ward: I think you’ve perhaps told me in an earlier interview that with judges, they don’t want to appear to have bias, because that might show up in an order or an opinion, and they’re going to get dinged, perhaps. So when they knew there was a situation where they might have implicit bias, they were more mindful of it. Am I correct in my memory of our earlier conversations?

Jeffrey Rachlinski: Well, yes. I mean, there’s a lot we don’t know about how to interpret the results, to be fair to our data, and what we can say and what we can’t say. The judges might well have been on guard in our study and then when they’re told, “Look, it’s a white defendant or an African-American defendant,” they might have worried about that very much and then been on guard. That said, I actually think the study’s a little bit better than that, because the judges we had weren’t really told what we were studying. They were told they were going to attend a session on judicial decision-making and they were given hypothetical questions.

In fact, most of them didn’t know we were going to collect any data when they entered the room. We asked for permission to do that at the outset. And so they didn’t show up at the conference wanting to be part of research on race in the judicial system, they just showed up at a judicial education conference and we asked them politely to be part of this research. And they were mostly—mostly they wanted to do that. A couple judges didn’t, but most of them did want to participate. So they’re not really quite on guard in the way you might think, in the sense that we told them “we’re studying race” and then they’re thinking about it. No, it was just a decision-making task.

And of course, because the way we study this is half the judges are reading about one defendant and the other half reading about the other, they don’t really know the other half has a different set of stimulus materials. So the judges reading about the white kid don’t know that half of their colleagues have a black kid and vice versa. So I don’t think they’re quite as on guard as—it’s not so much on guard for our study as I think they are generally on guard about race in the criminal justice process, so that when they see a description of an African-American kid or a white kid, that sort of triggers this desire to want to be race neutral.

Judges, we believe, are deeply committed to egalitarian norms and don’t want to be biased. And that’s not just that they don’t want to appear to be biased; I think they really don’t want to be biased. So in fact, when you signal the race of a defendant in a criminal context, a context in which judges are very concerned about the outcomes and the disparities that they see between white and black kids, that triggers this desire to not rely on their intuition, not rely on their associations, to think carefully about the problem. And it produced a pretty good result for us.

And we’ve found that in other contexts, too. We’ve done a number of studies involving sentencing decisions for judges and we really just don’t see racial disparities. Now bear in mind the way we’re studying it. The judges have all identical material except that we vary the race of the defendant. The real world doesn’t look like that of course. The real world has a lot of varied material.

Stephanie Francis Ward: Why do you think that is? Your study, it sounds like, showed that judges were pretty even across the board regardless of race, but as you said, the real world doesn’t look like that.

Jeffrey Rachlinski: Right. And that’s the difference. Well, I think there are a couple of things that are different in the world. First, judges in the real world operate in a hurry a lot of times. They’re under pressure, particularly, you see, in bail decisions. There’s a fair amount of data that bail decisions are quite disparate by race. Judges there have to process cases very, very quickly, and so they may not be as on guard as they are in our study, where they’ve just got one case before them and they’re asked to think about it very carefully.

And in other contexts in sentencing decisions they don’t get the same information. We have to worry about bias not just of the judge, but of everyone in the system. The criminal sentence is the product of a series of decisions by police officers, by probation officers, by pre-sentencing report officers, by prosecutors and the like.

So I think by the time it gets to the judge, a lot of parties have had an opportunity, well, that’s not quite the way to phrase it, but have had this opportunity to rely on their implicit judgments. And some of them, again, are in a hurry. They’ve got a lot of cases before them, they’ve got a lot of pressure, and so they make the kind of decisions that are more vulnerable to bias when presenting the facts or describing them or making recommendations to the judge.

If you control for all that and you tell the judge the race of the litigant explicitly, then what we’re seeing is that’s the moment where we don’t see racial disparities. It’s not that judges don’t have the implicit associations; it’s not even that they won’t rely on them. We showed that as well when they’re not being careful, that they will rely on these implicit judgments. But when they’re careful, when they’re slow, when they’re thoughtful, when everything else is controlled, that’s when we see the magic that they don’t rely on their implicit biases.

Stephanie Francis Ward: Do you think the legal profession—and that would be not just judges but lawyers as well—overall do you think the legal profession tends to be more or less open to the idea that they have implicit bias?

Jeffrey Rachlinski: I think more. I mean, there’s a lot of variation of course, both among judges and, of course, among lawyers, prosecutors and defense attorneys. There are a lot of people involved in the legal system, and they’re of all kinds of different mindsets, but I think generally speaking they’re more interested in implicit bias than non-lawyers. Because most of them, to the extent they’re officers of the court, have in fact sworn oaths to treat everyone the same. Judges both swear that they will treat everyone the same and are subject to ethical discipline if they don’t. Prosecutors do the same, as do defense attorneys, and all lawyers are basically swearing an oath to uphold justice, to treat everyone equally before the law.

That’s important to us. We’re all professionals and we mean not to treat people differently based on race. As I said, there are differences and of course if you look hard you can find evidence of people being explicitly biased. You can unfortunately find it among some judges as well, but I think that’s the exception.

Most judges, most lawyers I think, have this oath and they believe it. You don’t become a lawyer because you’re not interested in justice, and part of justice is equal treatment before the law, and it’s something that motivates most of us. It’s a core aspect of our career. So I think it is something that lawyers worry about more than non-lawyers.

Stephanie Francis Ward: So we’ve seen a fair amount of stories this year about women in private practice at large law firms who’ve brought gender discrimination cases. And it comes back to compensation, and I’ve heard a fair amount of complaints from people of color as well. It’s certainly not to say that most lawyers are in BigLaw, because they’re not. But do you think that perhaps when it comes to employment, or perhaps someone’s money, implicit bias is clouding some people’s views?

Jeffrey Rachlinski: Well, you know what’s interesting is that in the literature on intuitive judgment and biases in general, and this has a long history in psychology now, one of the first reactions economists had to the research psychologists produced, showing that people rely on their intuition too much and make judgments that they don’t want to make, was, “Well, you’re not paying your subjects. There are no incentives to get it right, and if you paid them, they’d get it right.”

And so then over time psychologists began paying their subjects in some cases to get it right. Not often, because actually we don’t believe that’s true, but to satisfy the economists, go ahead, we’ll pay our subjects for the right answer. And for a great many kinds—not all—but for a great many kinds of intuitive judgments, people rely more on their intuition when money’s at stake, because intuitive judgments seem right.

When you’re relying on your first impression, your first instinct, you’re more confident in your judgments than if you stop and think and deliberate. There’s a fair amount of data on that: stopping and thinking and doing the math, and for a judge doing all the checklists for criminal sentencing and the like, actually makes you less confident in your judgment even if it also makes you more accurate. So when money’s at stake, oddly enough, people are more likely to be wrong for some things. That’s not true of everything, of course; that would be an overstatement. But when money’s at stake, there are some things that kick in that are different.

The other thing, of course, is that when you see the data on compensation in law firms, you do see big disparities. There are a lot of structural reasons for that as well that have been imposed on the system, so that the partnership track at law firms, even today, isn’t nearly so conducive to stay-at-home parenting at any point in one’s career, which is something female lawyers are more likely to do than males.

You know, something we did that was really striking with judges, it was about ten years ago, involved a problem where we thought it might matter to the judges whether they had kids or not. It involved some college students, and we thought that if the judges had kids, a lot of those kids would be college age. So we asked them a little bit about their families. The male judges had an average of 2.2 children in our sample, this was a group of state judges, but for the female judges, the modal number of children was zero. More than half of them had no children.

Of the female judges who had children, the modal number was one, and only a handful had more than one kid. Now, to get to be a judge, you had to go to law school; you had to have a successful legal career for maybe 15, 20, 25 years; and then you went through the mechanisms by which you get appointed or elected to be a judge. So these were successful lawyers, and the women judges were clearly different from the male judges in this regard. It was a huge difference in terms of what their lives clearly looked like.

And so that’s still true in BigLaw firms as well. There’s a big difference between men and women in terms of how likely they are to stay home with kids. That creates a structural difference, and it’s what psychologists sometimes call structural discrimination: your judgment about what a lawyer, a high-powered partner, looks like really affects how you structure the track to getting there. And if a high-powered partner never stays home with kids, ever, then that’s what that track looks like. And those sorts of things matter a lot.

And now you also, of course, see huge disparities based on race, and that continues to be very troubling. The ABA did a wonderful study just two years ago where they had partners at law firms evaluate briefs written by associates. They weren’t real; there was one brief, and it was the same for everyone. It wasn’t a very good brief; it had lots of errors in it, including some spelling errors, and they had the partners rate it on a five-point scale. What the researchers varied is whether the brief was said to be written by an African-American associate or a white associate. When it was a white associate, they gave it a four out of five, which is not great but OK. When it was an African-American associate, they gave it a three out of five. One full point below.

And that’s consistent with a lot of research on racial bias: if there’s something you can seize upon to complain about, something that’s wrong, it looks worse when it’s a minority than when it’s someone who is a little closer to your stereotype of what that person looks like. So we still see that; it’s still in there.

But the question of course was whether lawyers are more concerned about bias, and I think they are. But of course, when you’re looking at bias in their law firm and their business it’s a little different than the way judges sort of see litigants.

Stephanie Francis Ward: Right.

Jeffrey Rachlinski: That’s a little bit of a different question and, you know, if you can denigrate another partner and make more money and use their race or their gender to do it I’m not—I wouldn’t doubt that people in law firms do that.

Stephanie Francis Ward: Well, and it sounds like what you’re saying is that implicit bias, for most of us, is very prevalent in whether or not we give someone the benefit of the doubt. And getting the benefit of the doubt can be huge for your career success.

Jeffrey Rachlinski: Well, you know, increasingly what we’re seeing in the literature is a suggestion that it’s withholding the benefit of the doubt. That’s kind of important to how implicit bias plays out over time in a law firm or in a business setting or the like. It’s the favor you don’t do.

We sort of see this in judges too when we talk to them anecdotally. You ask them, “What sentence was really difficult for you to impose?” Often they’ll have a story about some young person they had to give, or felt they had to give, a very long sentence to, and they looked at that person’s life and it reminded them of themselves in some way, or of their own kids. Right? They see something in there that looks like themselves. Something about the kid’s history, or where they went to high school, or where they grew up, just some aspect of it. And then it moves them. They really want to try to give the kid a break, the benefit of the doubt. They don’t see that as often in kids of a different race or gender from what they’re expecting.

So it’s really the benefit you don’t give that sometimes creates the bias over the long haul. One can imagine that in a law firm. A junior associate turns in a brief with a lot of spelling errors, and a partner sees it and goes, “Oh, you know, he was in a hurry. I remember a time when I was a young kid like him,” and so on. And they forgive it. But to the extent the person looks different from them in some way, or their personal history’s very different, they’re less likely to give that benefit. And so it creates a disparity in the long run.

It’s kind of ironic in a way. We want our judges to be sympathetic and empathetic, and we’d like to work in law firms where people are too. When someone makes a mistake, whether it’s having committed a crime or having turned in a substandard piece of legal work, we’d love to be in a forgiving environment where someone understands that. So you don’t want to say, well, let’s not be empathetic. But at the same time, empathy is not meted out equally, and that’s something we worry about a lot in judges.

One of the hardest questions judges ask me is, “Do you want me to not be empathetic? Because you’re saying that a lot of this empathy and emotion that I feel in a positive way towards the people in front of me, well, that that’s motivating me to be more racist.”

Stephanie Francis Ward: Right. Sure, be empathetic, just make sure you’re not empathetic to only people who look like you.

Jeffrey Rachlinski: Well, yeah, but that’s the hard thing, because empathy isn’t as easily addressed with the sort of stop-and-think mechanisms we tend to advise judges to use. A lot of what we tell judges to do, if they don’t want to rely too much, or at all, on their insidious biases, is to use more deliberative techniques: things like checklists, step-by-step procedures, decision factors, making them write an opinion. Things that cause them to slow down and stop and think. Those actually tend to be things that reduce empathy too. So it is an odd bit of advice.

I think the judges that are really hearing what we are telling them are the ones asking that difficult question. It’s why it’s so difficult. Because I don’t quite know what to tell them. I don’t want to live in a world where the judges are not empathetic or don’t see it. I’d love them to be as empathetic to litigants that don’t look as much like them, but that is a very difficult thing to do. That’s a lot harder than sort of smoothing things out in the long run.

Stephanie Francis Ward: But if there’s good scientific data that tells you, you know, I tend to be empathetic towards people who look and act like me and have the same beliefs as me, especially if I’m a person of privilege, which a judge likely would be. If a reasonable person knows that, can’t they just kind of keep that in their head and remember it when the next decision comes around that maybe calls for some empathy and the defendant doesn’t look like you?

Jeffrey Rachlinski: Well, that’s what we’d like to do, but here’s where the data sort of run out on implicit bias. We can get people to stop and think, not rely too much on their emotions and on the implicit associations they have, and to use deliberative techniques that will create a more uniform system. We don’t really have as much in the way of data on how to produce a level of empathy towards someone of the opposite race.

There’s actually an interesting recent study on this, completely outside the legal context, where some neuroscientists had people watch someone get pricked with a pin in the arm. It looks like it hurts, like watching acupuncture close up when you’re not expecting it. And they’ve got them hooked up to an MRI while they’re doing this, and what they find is that there are what neuroscientists call mirror neurons in our brains, so that we feel the same emotion that the person we’re watching feels. And so some pain responses are triggered, some empathy responses are triggered. They’re less triggered when it’s an opposite-race person, right?

So it’s almost at a neuro level that we see empathy operating. So it’s very challenging to get judges to try to do that. We often tell judges to imagine the person’s a different race. But you see, it doesn’t quite work, right, because it’s not just imagining the person’s a different race. If they’re a different race, they probably grew up in a different neighborhood, they went to a different school. All those things that trigger empathy are different. It’s not just their race. It’s the whole collection of who they are.

And it’s one thing to say: “Well, look. Imagine your sentence. How would you want your own son or daughter to be sentenced if they were in front of you?” That’s a good way to trigger empathy, but it’s probably not going to work as well when the person is not only a different race but has a completely different life than your son or daughter.

So I think it’s a very challenging thing to do and I don’t have—there are no easy answers to that one. We do have pretty good answers to how judges, generally speaking, can try to work harder and treat litigants in front of them of different races more equally, which is stopping and thinking. But will we ever get to a point where judges are just as empathetic? I don’t know. That may be a ways off.

Stephanie Francis Ward: I see. We’re going to take a quick break and when we come back we are going to talk about how implicit bias tests can be used to look at yourself and see how you can change perhaps.

Advertisement: These days, law firms need to do more with less. Making this happen requires efficient, cost-effective tools that work the way you do. Available as a desktop or cloud solution, Amicus Attorney practice-management software improves the organization of your firm and drives your bottom line. Visit AmicusAttorney.com to discover how you can join the thousands of lawyers who rely on Amicus every day to run their practices.

Stephanie Francis Ward: And we’re back. I’m Stephanie Francis Ward, and on today’s ABA Journal’s Asked and Answered I’m speaking about implicit bias in the legal profession with Professor Jeff Rachlinski. He’s a Cornell Law School professor, and he’s done various studies on implicit bias. Professor, it seems like we’ve seen a lot of news stories this year where people insist they are not prejudiced but then say or do something that, to many, would seem to suggest otherwise. Do you have any advice on using what the studies of implicit bias have found to help people see that they might have some biases that are affecting their decision making?

Jeffrey Rachlinski: Well, this is a difficult thing, because I see this all the time at my talks as well, people who swear that they, the phrase is often “don’t have a prejudiced bone in my body,” or something to that effect. And yet when they’re in my studies I can see that they’re—

Stephanie Francis Ward: That “I’m not racist but—”

Jeffrey Rachlinski: Yes, I can see that in the aggregate different people are producing different results. Somebody’s being biased. You know, we did one study of judges where we asked them, “How good are you relative to the median judge, the 50th percentile judge at this?” And we had to explain that. “How good are you relative to the median judge at avoiding race and gender bias in your decision making?” Ninety-seven percent of judges ranked themselves as better than the 50th percentile judge, right? We’re all better than average at this. And what they mean of course is “I’m not biased at all so I have to be better than the average judge.” That’s a tricky thing.

I think one of the problems here is what people imagine when they think of prejudice or bias. What they imagine is Bull Connor on a bridge with dogs and fire hoses. That’s the racist, right? And we’ll all agree that that was a racist. And the charge of racism is so pernicious that we all want to avoid it.

So it’s the construal of what racism is and what it looks like that leads people to say, “I’m not that,” and they’re right of course. They’re not Bull Connor, they’re not in the Ku Klux Klan, they’re not David Duke. That’s correct to say “I’m not prejudiced” if that’s your model of prejudice and bigotry. Even the term “implicit bias.” It was initially coined to try to get around that.

Stephanie Francis Ward: It is. It’s kind of a nice way of saying you’re racist.

Jeffrey Rachlinski: It was meant to be that. And partly—

Stephanie Francis Ward: Or sexist, or homophobic, it’s a nice way.

Jeffrey Rachlinski: Partly to explain it. Right, so it’s implicit bias, unconscious bias, if you don’t mean it, but yet you’re expressing it in a particular way.

Unfortunately, the term implicit bias has come into such widespread use now that even the phrase itself is something people equate with racism. So if you say to someone, “You’re harboring implicit bias,” the word harboring helps to make it look academic and clinical. But implicit bias itself now looks like racism. I’m finding that in the last year or two: judges are resisting the phrase “implicit bias” because they treat it as a synonym for racism.

Stephanie Francis Ward: But if they don’t want to examine it, is that part of the problem?

Jeffrey Rachlinski: Well, I don’t know that it’s that they don’t want to examine it. What I’m trying to do in the last year or so is, instead of calling it implicit bias, just explain that it’s an implicit association. To talk about, “What do you associate with what? And why?”

And the researchers Tony Greenwald and Mahzarin Banaji were always very good at that, at saying, “Well, why is it we harbor these associations?”

Well, we live in a society in which the nightly news is more likely to show an African-American face when the story involves a robbery or something like that, and a white face when it’s some successful politician or what have you.

And if you see that over and over again and you see that in popular culture and in TV shows and the like, of course you’re going to have that association. Especially if you live in a socially stratified environment, which most people do.

Unless you’re surrounded by people of various races all the time, all working together and having mutual respect for each other, you’re going to have different associations. They tried to explain that it didn’t make you a bad person, that it wasn’t that you were harboring animus; it’s just the associations you have.

I think that increasingly it’s incumbent upon us as researchers and scholars who write about it to make that case a little more firmly: to say that implicit associations are how the brain works. We associate one thing with another, and we like to categorize. It’s the only way we can get through life, in fact, to rely on our intuitions and associations and first impressions.

You go to a bank teller you don’t ask, “Are you a bank teller?” The person is behind the counter; of course they’re a bank teller. You should assume they’re a bank teller.

Making quick associations and snap judgments is really how you get through life. And so what works is being slow and careful, talking about the associations people have, and I think more importantly, telling them how to make decisions a little differently, how to be more deliberative, when it’s the kind of thing that’s really going to matter to someone, like a criminal sentence or a promotion in a law firm. It’s not just calling people racist.

I’ve actually been to a number of conferences on implicit bias where some sort of firebrand scholar says, “No, we should call people racist.” And I’ve done over 80 judicial education conferences. If I started with the phrase “you’re all racist,” I might as well leave immediately.

Stephanie Francis Ward: They’re not going to listen to you, yeah.

Jeffrey Rachlinski: They’re not going to listen right afterwards. And they shouldn’t. I’m not there to call people names. I do know that some people do that at these conferences. They get invited and they start with “we’re all racist.” Well, you know, I think you lose your audience right away.

What you want to do is explain the mechanism: “Here’s what happens. You know, you have this association. It’s perfectly natural to have it. It’s not your fault, it doesn’t make you a bad person. It doesn’t mean your moral upbringing was deficient in some way, and by the way not everyone has the same associations. You may be different, you may not. And here’s how people tend to make decisions naturally if they don’t try to check this bias and here’s a better way to make decisions.”

That message works pretty well on most people who are motivated to avoid reliance on intuitive implicit judgments of this sort. It’s the kind of thing that people are receptive to when they’re in an environment where they do want to be egalitarians. Which is, as I said, most of the legal profession.

Stephanie Francis Ward: Right. What do you think about employers asking employees to take, as you say, implicit association tests?

Jeffrey Rachlinski: You know, I actually kind of don’t like it and I’ll tell you why. It’s the kind of thing that people need to come to themselves a little bit more than it being forced on them. I’ve never forced judges to take an implicit association test. Now, we’ve done it in research where there’s a lot of social coercion, where they’re all in one room and they’re taking it, but that’s mostly for research purposes.

What I do at judicial education conferences is give them access to it. If you’re curious, if, say, after this talk it moved you in some way, then you can go and take an implicit association test. It’ll be more powerful if you decide to do it on your own than if it’s forced on you. There’s a certain amount of resistance to that kind of thing among people.

And I think what would be better is to try to explain how implicit bias works and let people figure it out for themselves. You know, an unmotivated person is not going to be helped anyway. They’ll insist, “I am not a bigot. I am not biased. I know it. I don’t need your test.” You’re going to have to find a different way to reach that person. They’re not going to do the implicit association test anyway, or they’re not going to get anything out of it. I think you have to try to convince them that this is just a natural part of how people think, and be a little softer. And let them come to it.

I find this repeatedly in judges, that they come to the decision afterwards. They want to do the implicit association test, they want to learn a little bit more about it, and they’re concerned about mechanisms to avoid this in themselves. Now again, judges are a different group. They’re very, very highly motivated to avoid bias, right? They have all kinds of social norms, they swear an oath, they face ethical discipline for being biased, so they, in particular, have a lot of motivation.

So it’s a little harder in an employment setting. And egalitarian norms there I think have to come from the top down. So I don’t like the idea of forcing people, but explaining why it’d be useful and making it available I think is a better approach.

Stephanie Francis Ward: So if someone wants to do an implicit bias or implicit association test, what sites would you recommend?

Jeffrey Rachlinski: Well, there’s one-stop shopping for this one. There’s a terrific website called projectimplicit.org. It was created by Mahzarin Banaji, who’s one of the inventors of the implicit association test, and it’s run out of a Harvard website. It’s completely anonymous. They’ve had, I think, over four, maybe five million people go to the site and do implicit association tests, and it explains carefully what the test is, what it does, how it works.

You can pick from any number of implicit association tests you like. The most commonly done one is black/white, good/bad with faces, but there are gender ones, male/female with career- or family-oriented words. There’s young/old, good/bad. That one just about everyone falls into: in the IAT we associate younger faces with much more positive imagery than older faces, and that’s not even all that implicit; a lot of people will say it explicitly. You can choose which one you’d like to do, it gives you your own results, it’s perfectly confidential, and it’s a very well-run site. And there’s as much research on there as you could possibly want if you want to know more.

Stephanie Francis Ward: All right, Professor. Well, that’s everything that I had to ask you today. Would you like to add anything else?

Jeffrey Rachlinski: No, nothing else. Thank you so much for having me on, Stephanie. It was a real pleasure.

Stephanie Francis Ward: Well, this has been great, professor. Thank you so much for your time. I really appreciate it. And this has been another edition of the ABA Journal’s Asked and Answered. I’m Stephanie Francis Ward. Thank you for listening.

[End of transcript]

Updated on Jan. 26 to add transcript.

In This Podcast:

Jeffrey Rachlinski

Jeffrey Rachlinski is a professor at Cornell Law School. He holds a JD and a PhD in Psychology from Stanford University. He studies the influence of human psychology on decision-making by courts, administrative agencies, and regulated communities.
