A Conversation with John Bliss on the Benefits and Risks of Generative AI for Lawyers

Insight February 14, 2024

John Bliss is assistant professor of law at the University of Denver Sturm College of Law, as well as a current affiliate at the Center on the Legal Profession. He sat down for a conversation with Dana Walters, associate editor of The Practice, to talk about his research on the use of generative AI in legal education and legal practice.


Dana Walters: Your early work investigates professional identity in the legal profession and movement lawyering. Recently, however, you’ve been examining AI and its implications for lawyers. Could you talk about this pivot—if it is a pivot—and how it connects to your earlier work?

John Bliss: Yes, thank you for the question. It is a pivot, in recognition that we’re living in transformative times. Generative AI is already going mainstream in the profession through the everyday tools of legal research: LexisNexis and Westlaw both use GPT-4 and other LLMs to answer legal questions and draft legal documents. It’s also going mainstream in word processing and internet search. I’m interested in what this all means for lawyers today and where this tech is heading in the future.

But in some respects this is not a pivot. As with my prior work, I’m looking at how lawyers can do the most good, and how the lawyer role opens up some possibilities but also constrains the kind of impact lawyers can have. It’s very plausible, given recent research on legal AI capabilities, that these tools could make lawyers more efficient, and with efficiency could come greater access to legal services. And then there are the more seemingly sci-fi scenarios, which I think we need to take seriously, where AI might be capable of doing a lot—or maybe all of—what lawyers do. And if that happens, you could get more access to justice. But there’s also a great deal of concern about what that means for lawyers and the legal system. A lawyer is more than a legal advice dispenser—they’re also a moral agent. They stand in as a representative of the legal system and a member of the political order that helps connect clients with their rights and the legal system.

Walters: Unsurprisingly, as you have jumped into this space you’ve done it from every angle possible—from teaching to writing to even helping develop a new nonprofit that teaches lawyers about generative AI. I’d love to learn about this suite of activities, starting with teaching. How do you teach something that’s constantly evolving? Could you talk about your class and approach? And how does your new paper, “Teaching Law in the Age of Generative AI,” where you surveyed students and faculty about their generative AI know-how, fit in?


Bliss: Yes, the paper is meant to be useful for law teachers dealing with this dilemma about how to integrate AI into their teaching. I conducted surveys of my own students who had taken a first-year property course where I had them use generative AI for some assignments. And then I also did a national survey of law faculty and received about 150 responses.

The students pretty consistently described an initial sense of awe about generative AI: when they first encountered ChatGPT and similar tools, the AI seemed like an almost omniscient legal mind capable of writing very polished responses to our course assignments. Then the students gained some first-hand experience using the tech in their writing, and their initial awe was often replaced with disappointment. The AI was making mistakes; if you know the law well enough, you could see it wasn’t quite citing the right sources or making the strongest arguments. But later in the semester, the students seemed to settle somewhere between this awe and disappointment as they found ways to use generative AI in their legal research and writing workflow. For example, students emphasized that AI tools were especially helpful with brainstorming, considering counterarguments, and overcoming the tyranny of the blank page. As one student put it, the AI can serve as “another brain,” or, as educational scholars put it, an interlocutor: someone to talk to throughout the learning and writing process. More generally, the students had a lot of positive things to say about how using generative AI aided their legal learning by requiring close attention to detail in their prompts and in their assessments of AI outputs.


The faculty I surveyed seemed less familiar with the leading AI tools. They were very supportive of the idea that we should be teaching students how to use this tech and how to weigh the risks and benefits, but they also felt very ill-equipped to provide that instruction themselves. My article is meant to help instructors decide whether we should integrate this technology in different aspects of the legal curriculum. I don’t think this is a one-size-fits-all situation. Reasonable minds can disagree, and there are good reasons to limit the use of this technology on some assignments. I lay out a wide range of considerations, including the importance of preparing practice-ready graduates with the new professional competencies that are emerging along with AI advances.

Walters: What sorts of concerns did faculty have?

Bliss: The faculty were pretty concerned about learning, although the students I surveyed mostly said that using AI aided in their understanding of legal doctrine. The faculty were worried that students would over-rely on AI outputs rather than doing the work themselves. But I found that the students were iterating with the tech; they weren’t just relying on the first answer. They would go back and forth to improve their prompts and see if they could obtain better outputs.

The article also outlines some specific AI-integrated assignments and exercises, where students can use generative AI in the research and writing process or where an instructor brings this tech into the classroom.

The faculty were also very worried about academic integrity—for instance, if you give students take-home assignments and tell them not to use generative AI, some students might use it anyway, which I think is a very valid concern. The technology is already pervasive, especially if students have access to the generative AI capabilities in Google and Bing, Microsoft Word, Lexis and Westlaw, and other apps.

A lot of faculty have suggested that they can AI-proof their exams by tailoring them to the specific course content. This can actually backfire: the latest research on AI performance on law exams showed that when an instructor input their own teaching notes, the AI produced answers graded as high as an A- or an A.

Walters: And then I assume that if you’re using it on an open-book exam, you may actually be learning the material through the iterations with the prompts, kind of like how a student might produce a notecard for a test and, in the process, learn the material.

Bliss: Exactly. I used to give students this example of an old Simpsons episode where Bart is staying up late the night before a US history exam, and he’s writing his notes on the bottom of his shoe in order to cheat. And then, when he shows up to the exam, it turns out he doesn’t need to consult his shoe, because he accidentally studied for the exam by writing up his notes. With generative AI, students can learn a lot from the process of interacting with the AI, especially if they’re encouraged to iterate and figure out the kind of information the AI needs in order to produce a good answer, and then carefully assess the AI outputs for quality and accuracy. This is probably a preview of the workflow that you’re going to see increasingly in legal practice where lawyers are using generative AI as a collaborator or an assistant throughout the process of research and writing.

Walters: So it sounds like teaching this, you take an applied approach, using exercises and having students engage with it. That leads me to ask about your nonprofit, which is certainly trying to meet a need and fill a gap of understanding on the faculty side, but also on the practitioner side. Could you tell us about this nonprofit and what it’s trying to achieve?

Bliss: The organization is called the Oak Academy for AI in the Legal Profession, and we train and educate lawyers about state-of-the-art legal AI. We were motivated by our perception that most of the discussions around generative AI in the legal profession are based on outdated and inaccurate information. It’s a fast-moving field. And many lawyers are afraid of this technology, worried that they won’t use it competently. Some see headlines about hallucinations, or they’re worried about confidentiality in these systems. But many of these fears are unfounded, and much of this discourse is deeply uninformed. So there’s an important educational mission here in the programming that we’re developing to bring practitioners up to speed.

We start by discussing the leading research in legal AI. Often, this research is reported in misleading ways. For example, there was an article recently that suggested an extremely high rate of hallucinations on legal questions. But I think a lot of readers, as evident in mass and social media, didn’t realize that this was not a study of leading AI. It was a study of GPT-3.5, essentially last year’s technology. It was a rigorous and well-designed study, and the findings are illuminating. But the hallucination rates are much lower in GPT-4 and the other LLMs that are going mainstream in legal tech applications. Lexis+ AI claims to be hallucination-free. Westlaw’s CoCounsel also provides real legal citations. So a big part of what we’re doing is correcting these misconceptions, including setting the record straight on where AI’s capabilities tend to be underestimated and overestimated.

For another example, Eric Martinez, who’s on the faculty of Oak Academy, took a close look at the claim that generative AI scored in the 90th percentile on the bar exam. That claim appears to be based on a February administration of the exam, where repeat test-takers are prevalent. Among first-time test takers, it’s actually only around a 60th-percentile performance, and below the 50th percentile on the essay portion. That’s very different from a world where 9 out of 10 lawyers are outperformed by a machine on an established, albeit deeply flawed, metric of legal knowledge and writing proficiency.

Our program has leading researchers on legal AI, including Dan Schwarcz, professor of law at the University of Minnesota, who has conducted studies of AI performance on law exams and on some real legal tasks, including a randomized controlled trial. And then we have some folks provide tech demos. At our first session, this included vLex and Everlaw. And we discuss how to select different AI tools for different purposes. I give a talk about professional responsibility and how legal scholars have thought about AI and the future of the legal profession.

Walters: I want to close with one last question: when you look at the landscape of what’s coming with the legal profession and AI, what are you excited about or hopeful for, and what are you scared of? Maybe they are the same thing.

Bliss: I’m excited about the possibility that we could radically expand access to justice by making lawyers more efficient or by putting powerful legal AI technology in the hands of people with legal problems, especially people who otherwise wouldn’t have access to the expensive and scarce resource known as a human lawyer. I’m also excited about the possible well-being benefits for lawyers. If AI removes a lot of repetitive and mundane tasks and leaves us with some more interesting strategic and interpersonal work, I think that could point toward a hopeful future.

On the other hand, I was with a group of law students at lunch today where someone said, “I’m scared of AI. I feel like it’s developing too fast, and we’re not ready for it, and the people in control of developing it don’t represent society’s interests very well.” And you know, I hear that. And I think that there are some grave risks where AI spreads misinformation and bias and produces low-quality outputs that could diminish legal services. I don’t even dismiss the risk of catastrophic and existential threats to humanity—I recently wrote an article about the lawyers working on these issues of AI safety. The future of AI is uncertain, but many think that humanity may soon hand off the mantle of being the most intelligent beings on the planet. So, I suppose my range of hope and despair is wide.

But, rather than ending on a note of doom, I want to say that I’m really focused on how this tech can be useful for lawyers. To this end, I recently started a blog on the latest empirical research on legal AI capabilities. I have a post there discussing the possibility that we are on a rapid ascent toward legal AGI—artificial general intelligence—while noting that it’s also very plausible that advances in AI will slow down in the near future. We’ve had many of these cycles of hype with AI for a century now. I think, in the end, it’s really important to have an accurate understanding of where we are in order to adjust our expectations around this technology and to effectively integrate it in legal practice and legal education, and to really understand the risks and benefits.

Walters: I think that’s a good note to close on—thank you for your time.
