Assisting Knowledge Workers

Speaker's Corner, From The Practice, March/April 2023
A Q&A about AI's possibilities

Jason Boehmig, CEO of Ironclad, recently sat down with David Wilkins, the faculty director of the Center on the Legal Profession, to discuss how technologies like ChatGPT are being used in the legal profession and why lawyers can't sit on the sidelines of technological development.

David Wilkins: At Ironclad, your latest product incorporates OpenAI’s GPT-3 technology. Could you talk a little bit about how it’s being used and why you’re excited about it?

Jason Boehmig: Let me start by saying that this is the most exciting time of my career. I’ve always been excited about the intersection of law and technology, but I didn’t even foresee how impactful this particular technology was going to be. OpenAI is a customer of ours, and OpenAI, of course, makes ChatGPT, GPT-3, and soon-to-be GPT-4. It’s pulling forward our product road map by three to five years.

AI Assist is our flagship product incorporating generative AI. It helps lawyers make better contracts. Even the naming—AI Assist—helps put forth our view of where this technology should sit in relation to the profession. It should assist knowledge workers in making better decisions. This is not a hypothetical product. There are real lawyers using it in their actual course of business.

Usually, a lawyer trains an "algorithm" in their head for how their business does something. Say you start as a lawyer at a Big Tech company. It's day one. You don't really know how they negotiate things, so you pass your work to a more senior attorney, who gives you feedback. Over time, you train yourself with your own internal "algorithm" that tells you: "Oh, when I've seen this type of clause get negotiated before, we usually say this." And that internal algorithm gets better and better. Of course, if you ever decide to leave that company, that algorithm walks out the door with you.

Now, if that company is using AI Assist, and you’re a new attorney and you get a red line back, instead of asking the senior lawyer, you would ask AI Assist, “How does this company normally respond to that type of red line?” And it would respond, “Normally, this company would never agree to this or that.” I want to be clear here: AI Assist is not advising you to agree to this or that. It’s giving you facts about the way the business is run that can help inform a better decision.

Wilkins: Could you talk a little bit about how OpenAI's GPT-3, let alone GPT-4, differs from the kind of AI you were using before? One of the things that has always distinguished Ironclad is that you were early to the party in using AI in this kind of contract formation. But what makes this so different?

Boehmig: One of the things we've always found with respect to AI in the legal profession is that lawyers set a very high bar. You have to get to almost perfect levels of accuracy for it to be useful. We invested a lot of resources in getting close to that. I'll give you one example from another side of our product. Say you already have a completed contract, in the form of a signed PDF, and you want to turn that PDF into structured, tagged data. In a previous world, we were doing pretty much everything in-house. It took us a long time to add new fields that Ironclad could recognize with the degree of accuracy we felt our clients needed, which was generally 99 percent or higher. Over the course of around two years, we developed 20 or so fields, all homegrown, where you could feed in a coffee-stained PDF and Ironclad would recognize, with 99 percent accuracy, that a particular section was an indemnification clause, for instance.

With the help of OpenAI, we are shipping a release this week that adds 179 new fields, which Ironclad can recognize instantly.

If you would've asked us six months ago how long it would take to add 179 fields, the answer would've been years. That answer is now weeks. My nonengineer-but-technologist understanding is that GPT-3 is effectively a very good summarizer of information. It can take an extremely wide variety of data and summarize it into whatever "format" you want. It's also good at recognizing commonalities among very disparate data sources. The effect is that you need less training data to make an effective model. Before this technology, you needed millions of data points to train a model. Now you only need a couple hundred good data points.

What this has done is change the way that companies build technology. Companies typically have been working on the "how do you ingest a PDF and extract data from it?" problem. The previous best way to do that was to gather millions of data points—often manually, by having people read and tag documents—so that over time your model would get good enough to recognize more and more.

Now what you can do is feed a couple hundred data points into a GPT-3 type of system, and it will start recognizing the indemnification clause from just those examples, then rapidly extrapolate from there. That explains why we were able to add 179 new fields so quickly.
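To make that shift concrete, here is a minimal sketch of what few-shot clause recognition might look like with a GPT-style model. It is illustrative only: Ironclad's actual pipeline is not public, and the OpenAI client call, model name, labels, and example clauses below are placeholder assumptions.

```python
# Illustrative sketch only; not Ironclad's pipeline. Assumes the OpenAI
# Python client and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# A few labeled examples stand in for the "couple hundred good data
# points" described above; a real system would use more.
FEW_SHOT_EXAMPLES = [
    ("Each party shall indemnify and hold harmless the other party...",
     "indemnification"),
    ("This Agreement shall be governed by the laws of Delaware.",
     "governing_law"),
    ("Either party may terminate upon thirty (30) days' written notice.",
     "termination"),
]

def classify_clause(clause_text: str) -> str:
    """Label a contract clause by showing the model a few tagged examples."""
    shots = "\n\n".join(f"Clause: {text}\nLabel: {label}"
                        for text, label in FEW_SHOT_EXAMPLES)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You label contract clauses. Reply with the label only."},
            {"role": "user",
             "content": f"{shots}\n\nClause: {clause_text}\nLabel:"},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_clause(
    "Supplier agrees to defend, indemnify, and hold Customer harmless..."
))  # expected output: indemnification
```

The point of the sketch is the data economy: the model sees a handful of tagged examples in the prompt instead of being trained on millions of labeled documents, which is what collapses "years" of field development into "weeks."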

It also explains why our strategic bet on contract creation is key. Contract creation occurs in just about every industry you can imagine. As an example, we just signed up a major hospital. I start to get really excited when I realize that we are helping hospitals run more effective operations through this type of AI and process automation. It shows you the power of our profession and the legal industry, and how important contracts are: we're really the bedrock of every industry and every endeavor. AI just accelerates that to a point that even the cutting-edge folks didn't think was possible six months ago.

Wilkins: You said something early on that I think is so important for how we think about legal tech in the profession. You said the bar the profession typically wants from legal tech is really high: 99 percent accuracy. Of course, did anybody do a field test on the error rate of bleary-eyed associates creating or reading contracts in windowless warehouses in Arizona in August? Answer: no. But it does raise an interesting question that I think people are struggling with. We used to at least imagine we knew what "competence" was for lawyers. We had some idea about what lawyers should know—and what they should learn. But this is now raising a whole different set of questions about what it will mean to be a competent lawyer working with this kind of technology.

For the foreseeable future, we're talking about lawyers working with technology—and even more than that, lawyers who are using technology that they are not going to fully understand. What does it mean to be a competent lawyer working with a GPT-like product? What should lawyers have to know?

Boehmig: That’s an important question. My understanding of the spirit of regulation is that lawyers need to be responsible for the technology that they use as part of their practice. And that to me does seem like a great guiding principle for what competency means in this case.

I don't think lawyers need to learn the inner workings of GPT-3. But I do think that they need to assume responsibility for the implications of using that technology. I think it would be malpractice for a lawyer to type a question into ChatGPT and copy and paste the answer into a client brief or a business contract without doing some diligence on it.

We—lawyers—must maintain the responsibility for being the ultimate arbiter of what is in the best interest of our clients. And just as I wouldn't expect a lawyer to copy and paste from a Google search result without checking it, I don't think lawyers need to go deep into the inner workings of ChatGPT. But I do think they need to maintain their ability to independently evaluate the results of that technology.

I feel a little bit alarmed that I'm only one of a handful of lawyers who understand what we are making. Some of these debates rely on technologists to take responsibility for the tools we develop for the profession. That is why I've trained my entire design team to make sure that we're presenting facts to attorneys and that we're not designing technology that tells lawyers specifically, "You should do this." We never want a "you should do this or that" coming from an AI recommendation. We're saying, "We have seen this clause in 7 percent of documents." We're not saying, "You should not have this clause in your document." It's a very fine line that is lost on most folks, but it is very important to maintain.
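As a hypothetical illustration of that "facts, not advice" line, the underlying logic might look something like the sketch below. The data model and wording are assumptions for the example, not Ironclad's actual schema or interface.

```python
# Hypothetical sketch of "present facts, not recommendations."
# The Contract data model below is an assumption, not Ironclad's schema.
from dataclasses import dataclass, field

@dataclass
class Contract:
    contract_id: str
    clause_types: set[str] = field(default_factory=set)

def clause_frequency(corpus: list[Contract], clause_type: str) -> float:
    """Return the share of past contracts containing the given clause type."""
    if not corpus:
        return 0.0
    hits = sum(1 for c in corpus if clause_type in c.clause_types)
    return hits / len(corpus)

def describe(corpus: list[Contract], clause_type: str) -> str:
    # Deliberately descriptive, never prescriptive: report the number
    # and leave the judgment call to the attorney.
    pct = 100 * clause_frequency(corpus, clause_type)
    return f"We have seen this clause in {pct:.0f} percent of documents."
```

The fine line lives entirely in `describe`: it reports an observed frequency and stops short of telling the lawyer what to do with it.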

Wilkins: Part of the problem we’re having with legal tech is that we have, on the one hand, a group of producers who know a lot about technology and almost nothing about law, and they’re basically building hammers, looking for nails. They say we have this cool new thing, and lawyers are completely backward and still writing with quill pens. And then we have a bunch of people who are buying legal tech who know a lot about law and who know nothing about technology, and they’re just seduced by shiny new toys.

Lawyers are never going to become technologists, but they might need to know the right kinds of questions to ask the technology. And to do that, they have to have some semblance of an understanding. So my question is: How do you bridge that gap?

Boehmig: One of the interesting things about AI is the degree to which it changes everything. It changes the interfaces for software. It changes what technology is capable of. And I do think that one of the trends that we’re seeing at the beginning of this revolution is what OpenAI is calling “prompt engineering.”

There’s a real art to prompt engineering. Lawyers are used to writing a very complicated string of characters into Westlaw that produces the specific result we want to see. And there’s a huge difference in the quality of the result that you can get when you’re writing a prompt to an AI engine that allows that sort of interface. It’s like the difference between getting a Van Gogh painting and a crayon drawing, depending on how specific you get about what you’re asking for.

It'll be interesting to see how these issues play out, and I don't yet have a viewpoint on how specialized prompt engineering becomes. I could see a world in which you're working with a contracting platform like Ironclad and folks get good at prompt engineering. I could even see a world in which that's a full-time job at a law firm. But I could also see a world in which this just becomes a normal technological skill for people, just like we can all write search terms and we know to put quotation marks around something if we want those specific words.
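To make the Van Gogh-versus-crayon point concrete, here is a hypothetical pair of prompts for the same review task. Neither is a real Ironclad or Westlaw prompt; both are made up for the example.

```python
# Hypothetical prompts illustrating the specificity gap.
VAGUE_PROMPT = "Summarize this contract."

SPECIFIC_PROMPT = """You are reviewing a SaaS agreement on behalf of the customer.
Summarize it in five bullet points covering:
1. Term length and renewal mechanics
2. Liability cap and its carve-outs
3. Indemnification obligations on each side
4. Termination rights and notice periods
5. Any non-standard clauses worth flagging
Cite section numbers where possible."""
```

The vague prompt tends to produce the crayon drawing; the specific one states the role, the scope, and the output format, which is essentially the skill being described as prompt engineering.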

Wilkins: This really goes to a point that, as somebody who's been a pioneer in this area, you've certainly confronted at every turn: the resistance of lawyers to the idea that some part of what they do could be done by some kind of process or machine. We wrote this wonderful case study about the incredible Mary O'Carroll, chief community officer at Ironclad, from when she was in charge of legal operations at Google. We called it the "80 percent solution." Lawyers are happy to let you play around with 20 percent of their jobs, which they hate because it's boring and repetitive. But the minute you start getting into what they consider the core of their jobs—the 80 percent—they get both skeptical and scared, and there's a lot of resistance. And that has stopped a lot of innovation. I wonder, first, do you see that as a danger here and, second, if you do, what kinds of things are you doing to try to overcome that?

Boehmig: The thing that I often say is AI is never going to replace lawyers, but lawyers who use AI and lawyers who use technology are absolutely going to replace lawyers who don't. And I think that's a hundred times truer with AI than it is for some other technologies. I've always said that we want to make lawyers 10 times more effective. There's a world in which that might be more like a thousand. I don't think we know the full extent of it yet, but I feel comfortable saying that's within the realm of possibility. And so that's an entirely new way of thinking about how you can practice law.

Some of the assumptions we take for granted might not hold true in this new world. A large law firm usually exists because you have one or two rainmakers who bring in all the business and the rest of the firm does all the work. What if you don't need the rest of the firm to do the work? It starts to get pretty sci-fi and potentially dystopian, but also potentially utopian, in that maybe lawyers don't need to do the stuff that gives the profession one of the lowest job-satisfaction rates on the planet.

I am both an optimist and a pessimist about the potential impact. But I do think that lawyers who don’t adapt to this are not going to survive.

Wilkins: There’s been a lot of talk these days about the dangers of runaway technology and existential risk. Some of it is really sci-fi, but some of it is just encapsulated in what you just said: if lawyers become 10 times more productive, there might be an argument that we need a lot fewer lawyers and that the whole structure of the legal industry would fundamentally change.

I wonder how you think about that as somebody who trained as a lawyer and practiced as a lawyer, but now embodies the transformation of the profession. And, as part of this, could you say a little bit about what we in law schools should be talking about in that context? How should we be raising these kinds of complex challenges and trade-offs, which are accelerating in a way that we no longer can—or should—ignore?

Boehmig: First, I think we need to think more broadly about how we can impact the profession. I don't think I realized this when I decided to quit my law firm job and start working in this space. I still very much consider myself part of the profession. I keep my license up, and I care about it. We need people who care about the profession and see the value of lawyers as part of this ecosystem. I really do think that there's an alternate version of this that goes badly, but it doesn't have to.

So, I think we need to broaden our mindset around how to advance the benefit that lawyers provide to society.

And I also think it's a time for optimism and excitement. There are multiple Supreme Court cases coming up on the intellectual property of AI, and there are so many interesting novel questions. It is a fascinating time to be involved, and we need people arguing those cases who understand the technology. And it could go either way. We could develop something really dystopian, or we could develop something that's great. And I don't know how that's going to go.

The future is being written over the next decade or two, and helping people understand that is important. But this can be very energizing for the profession, and I remain optimistic about the overall impact and what it looks like for your students' careers. The options no longer look like "go to a big firm" or "try to tough it out in solo practice." The world is wide open right now, and everyone can make a huge impact on society by participating in this. I'm excited.


Jason Boehmig is the CEO of Ironclad. He was previously a corporate attorney at Fenwick & West. He has also been an adjunct professor at Notre Dame Law School and given guest lectures at universities like Harvard, Stanford, Yale, Cornell, and Northwestern. He holds a J.D. from Notre Dame Law School.

David Wilkins is the faculty director of the Center on the Legal Profession.
