The Cost of Judgment

Speaker’s Corner From The Practice March/April 2022
A conversation about technology and decision making

Dana Walters, associate editor of The Practice, recently sat down with Roshni Raveendhran, assistant professor of business administration at the University of Virginia’s Darden School of Business, to talk about the psychology of behavior and decision making.


Dana Walters: In this issue, we are focused on judicial decision making. Within your own research, how do you think about and teach the psychology of decision making?

Roshni Raveendhran: My research really broadly focuses on understanding how novel technologies influence people and vice versa. In this context, I’ve examined the psychological processes that underlie people’s decisions to adopt technologies—like, say, behavior-tracking technologies, or artificial intelligence in certain domains, or virtual reality, and so forth—and how their decision making is impacted once they adopt them.

A common theme that I like to emphasize is how people’s everyday social experiences are fundamentally changed when they interact with many of these novel technologies. When thinking about the psychology of decision making in the context of technology, we need to consider the psychological impact these technologies have on people and really highlight the ways in which we can start leveraging them to create positive impact. Because my perspective is that, whether we like it or not, these technologies are already spreading to many different domains and are present in a lot of the places where we’re making decisions, including courtrooms.

We use algorithms to make decisions in many, many different domains.

Walters: I’m wondering if you could provide some examples of the ways in which technology impacts decision making more generally, as a way to start thinking about what it means for technology to enter the courtroom.

Raveendhran: This comes into play when we talk about algorithmic decision making, which we have to examine when we think about judicial analytics. We use algorithms to make decisions in many, many different domains. For instance, in the business world, we use them for hiring, for entrepreneurship, for deciding who should be funded. I wrote an article for MIT Sloan Management Review that focuses on how funding decisions are made and how algorithms are now being used quite extensively to make them.

And one of the things we tackle in that article is, as these systems are increasingly being used, is there a way for us to start thinking about how they can reduce bias against women entrepreneurs? Could we leverage these technologies positively? Could we introduce them at the right places in the decision-making process so that women entrepreneurs are not overlooked by VCs, who might be more used to male entrepreneurs, if you will?

Walters: You brought up judicial analytics, and I want to dive into that. In one of our stories for this issue, we focused on judicial analytics—tools that aggregate and visualize information on judges. Some question the impact such tools might have on the behavior and decision making of judges if they know they are being tracked. How would you put this issue into perspective, perhaps by characterizing the way such tracking technologies broadly impact human behavior?

Raveendhran: I think about any technology as a tool, and whether that technology has a positive or a negative impact on how we behave really depends on how we implement and understand those technologies. Algorithms enable us to draw important conclusions about other people or ourselves based on large amounts of data, but we have to ask, What is that pool of data? One thing to remember is that we as people can experience the tracking context itself both as informational—providing useful, valuable feedback about our own behaviors—and as evaluative—where we’re being constantly watched and monitored and judged.

We would all like to know that about our own behaviors. But it becomes thorny if we know that there’s a human on the other side who might be watching and judging.

My research really looks at how technology can come in and help people experience this tracking context as one that’s informational but not evaluative. So, if you think about judicial analytics, for example, it could be a useful tool for judges to be self-reflective, using the informational valence. If judges understand that there are some things they’re clicking on, some pieces of information they regularly look for, some people they constantly go to talk to about these things, or certain books they will definitely go refer to, they may consider those strategies in a new light. So, if there are behaviors that people know they’re doing anyway but would like to change, this can be valuable. They may think: “Oh, this is where my bias is coming from, or this is where I’m actually being very, very efficient. Maybe I need to streamline the way I make these decisions by doing more of this and less of this.”

We would all like to know that about our own behaviors. The place where it becomes really thorny is if we know that there’s a human on the other side of the technology who might be watching and judging. My research shows that when you put algorithms in place and reduce the salience of humans in that process, we can really foster a tracking environment that is informational rather than evaluative. I think the key takeaway would be: Can we use these programs to ensure that we get more accurate, trustworthy information about our own behaviors without the cost of judgment? Would it be helpful if the data coming from these types of programs allowed judges to make more thoughtful interpretations of their own behaviors, as opposed to worrying about whether somebody else is going to look at that data? This is all tricky in this context, but these are some of the questions I would ask.

Walters: That’s really interesting. It sounds like that calls for a difference in how these tools are being marketed. Right now they’re being bought by law firms or general counsel to understand litigation strategy, often vis-à-vis a particular judge. But what if they were marketed toward judges as well? On a related note, what should lawyers and judges know about surveillance and related technologies, as they’re increasingly being asked to rule on them, either in workplace or criminal justice settings? As an example, in France, judicial analytics—which might be thought of as a form of external surveillance—was banned as recently as 2019.

Raveendhran: I think the complicated issues with tracking really start with people being surveilled. In my research, I carefully use the word tracking rather than surveillance. I can track my own behaviors in order to make changes to them and to improve and to do better. But I can’t surveil myself—somebody is surveilling me. There’s always this other party that’s looking in when there’s surveillance happening. The unfortunate thing is that the minute people get their hands on tracking technology, the first use case that comes to mind is surveillance. It’s really “How can I implement this so that I can watch over somebody doing their job?”

Is the tech being implemented as tracking or as surveillance? That really depends on consent.

In fact, going back to what I was saying earlier, putting a human in place to watch over somebody else’s behavior at any part of the tracking process will make the process evaluative from a psychological standpoint, and it’s going to make people feel less motivated and averse to being in those situations. One extreme example is China’s social credit system. Everybody is being constantly watched, and every single online transaction is being monitored. Every single step that people take within their society is being tracked, simply because it can be. The problem is it’s not information that people are voluntarily giving up about themselves so they can change and improve their own behaviors.

It is the government looking in and creating a social credit system that would tell you, this person’s a good citizen, or this person is behaving in ways that are not OK, so we have to take points off their social credit score. So, it’s a very obviously evaluative system, which undermines people’s autonomy in very severe ways. So, I guess to answer your question, one thing judges can think about is the purpose of tracking. Could it actually be implemented as tracking rather than surveillance? And if we want to peel back a layer, whether it is being implemented as tracking or surveillance really depends on consent. If something is being implemented without people’s consent, that is you surveilling somebody else. And that carries a very weighty implication that needs a very weighty response.

Walters: Following up on that, within your research, how important is it that people understand the behavior-tracking technology? For instance, Loomis v. Wisconsin is a very famous case where risk-assessment software was used in sentencing, and the case really hinged on the fact that the software in this case, COMPAS, was a trade secret—not even the judge had access to the algorithm. I’m wondering, do you think it’s important for people to have access to what the technology is actually doing or to see the algorithms?

Technological literacy becomes very important so that judges can make an accurate, informed decision about where exactly technology should come into play.

Raveendhran: Yeah, that’s an important question. Many industries, from medicine to law, are being driven by algorithmic decision making at some point or another. It’s a matter of how much rather than whether. And so, some level of technological literacy is certainly important. What I will emphasize, though, is that it’s not just about understanding how the technology works; it’s about understanding where we should implement it and how much we should rely on it. Do we want to rely on it at the beginning of the process, while we’re screening through tons and tons of information? Or do we want to rely on it to make the ultimate decision? That’s what we need to decide. I believe that there is a huge potential for positive impact if we think about human-technology collaboration.

Now, if judges have to read through thousands of papers and cases and lots and lots of information and process them before making a decision, that’s very, very time-consuming. But technology could decide which is the most relevant thing to read. What are the most recent cases that you need to look at before you make some decisions? So, could we use technology in that part of the process, where it allows you to sift through tons and tons of information and narrow it down to the most relevant pieces you need to read, so you get to a point where you are able to say, “OK, now that I have this information, can I use it to create a framework to help me make my decision?” Technological literacy becomes very important so that judges can make an accurate, informed decision about where exactly technology should come into play.

Walters: I see that. I think you hit on this already a bit, but to ask in a more direct way, on the other side of this, as AI infiltrates the justice system, what are the ethical questions that come up for you when we think about the injection of novel technologies into a profession like judging where decision making is often so “human”?

Raveendhran: There are two things that I would love for people to be aware of. One is that it’s important to understand how AI functions. The problem now is that popular discourse focuses on, “Oh, AI is bad or AI is good.” And people are forced to choose sides and say, “This is what I like, or this is what I don’t like.” Research has shown that there are two main perspectives in this area: algorithm aversion is the perspective that people actually don’t like algorithms to make decisions, whereas more recently, some very influential research has shown that some people actually prefer algorithmic decision making in certain common domains.

But the thing to remember is that it’s not about whether algorithms are good or bad, or whether technology is good or bad. What’s important to focus on is how AI functions: it functions based on existing data. One important ethical consideration is how we can make sure that the biases already present in the data used to train AI are not perpetuated as we start using these systems. Whether we use them in collaboration with human decision making, as I was saying before, or independently for different types of decisions, the most important thing to consider is how we ensure that biases don’t seep into the data we use to train AI.

Beyond whether algorithmic decision making is accurate or trustworthy, it’s also important to consider how people might be psychologically affected by decisions made by technologies.

The second thing I would say is that it’s important to think about the people who are impacted by decisions made by algorithms. For example, if AI is being used to make hiring decisions, people could be in the algorithm-aversion camp and categorically believe that the tech will not be helpful or accurate and will only help the team hire or fund the same types of people. There the focus is on accuracy and trustworthiness. On the other side, some might believe that algorithms will help take away human biases. That’s true too. But then what goes missing is who is being impacted and how are they being impacted? From the perspective of the candidate, some very recent research has shown that even when decisions are unbiased, people actually feel those decisions are unfair. They may be unbiased. They may not be coming from biased data. You may very well provide opportunity to somebody who didn’t have opportunity before, but because these decisions are made by algorithms, people may perceive them as unfair, because they think algorithms reduce you to numbers and don’t consider you holistically as a person or as a human being.

Despite arguments and questions people might have about whether algorithmic decision making is accurate or trustworthy, which is an important discussion to have, it’s also important to consider how people might be psychologically affected by decisions made by technologies. Are they perceiving them as fair? Are they perceiving these decisions as ones that considered them as whole people, ones that empathetically took their pain into consideration? Are these decisions ones that really focus on how people function in our society? Or are they just looking at data and reducing people to numbers?


Roshni Raveendhran is an assistant professor of business administration in the Leadership and Organizational Behavior area at the University of Virginia Darden School of Business. She is also a Faculty Fellow affiliated with the Batten Institute for Innovation and Entrepreneurship. Roshni received her Ph.D. in business administration (management) from the Marshall School of Business at the University of Southern California.
