Do lawyers have a duty to be well-versed in the benefits and challenges of artificial intelligence? And if so, what does that mean as ChatGPT and other (generative) AI technologies emerge and capture our collective imagination—and collective anxiety?
Comment 8 to Rule 1.1 of the American Bar Association’s Model Rules of Professional Conduct reads:
To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject [emphasis added].
For Sue Hendrickson, executive director of the Berkman Klein Center for Internet & Society, ChatGPT and other AI technologies hold a lot of promise—promise that lawyers should be taking advantage of, as long as they are attentive to the risks and limitations. As a former partner at Arnold & Porter who co-headed its Technology Transactions and Life Sciences Transactions practices, she believes it is imperative for lawyers to adapt and invest in understanding and using new technological advancements, both for practical reasons and to meet larger professional obligations. There are many ways AI tools can streamline and jumpstart legal analysis, prediction, and writing tasks. Practicing lawyers can—and should—use technology to the extent that it helps them better advance the interests of their clients in ethical ways.
However, Hendrickson notes, technological change carries real risks, this new class of generative AI in particular; she admits that she spends a lot of time thinking about cautionary tales. While she doesn’t think lawyers—or the legal profession more generally—should be scared of experimenting with tech like ChatGPT, she says, “It is essential to understand its limitations.” She continues:
Generative AI tools are not a substitute for critical legal thinking nor a reliable source of accurate and truthful information. It’s important to make sure there is enough human oversight and attention to issues of client confidences so lawyers are still meeting their obligations of professional conduct around how they integrate those tools into their practice.
Hendrickson is describing the double-edged sword of generative AI in the legal profession. Lawyers who want to do the very best for their clients—and abide by the standards of the model rules—may want to explore incorporating technologies like generative AI that could make their practice better and more efficient. But there are risks—professional, ethical, societal, and otherwise—that need to be reckoned with when introducing these new tools into practice. A post on Lexology by lawyers at Freshfields Bruckhaus Deringer listed serious concerns to consider, including regulation (the EU AI Act); intellectual property (who owns the input data and the output); liability (negligence or fraud that results from use of a chatbot); privacy (if personal data is included in large language model training data); and discrimination (AI reflects the biases it was trained on).
Elettra Bietti, a joint postdoctoral fellow at the NYU School of Law and the Digital Life Initiative at Cornell Tech in New York, raises a broader philosophical concern: the increased distance between the people making the technology and its broad swath of users means more intermediaries and less understanding. Drawing on Jonathan Zittrain’s piece, “Intellectual Debt: With Great Power Comes Great Ignorance,” she explains that as the distance between modeling inputs and outputs grows, computational and bureaucratic systems become increasingly “impenetrable, complex, difficult to govern, and untransparent.” This means that, when harm occurs, figuring out why and, importantly for lawyers, who is responsible becomes challenging.
Hendrickson and her colleagues at the Berkman Klein Center are investigating questions around AI ethics, governance, transparency, and safety, a complicated calculus when so much of AI is owned by private companies. “Immediate corporate gatekeeping is not the answer. We need to first work collectively across a broad array of relevant communities to define risks and discuss boundaries, establishing principles and methods of governance,” Hendrickson says. She continues:
With this new class of generative, widely accessible tools, we are talking not just about changes to practice norms but also about broad societal risks with the potential to affect us all, from well-documented issues of bias, to heightened concerns about the potential for manipulation, misuse, and the spread of misinformation. So when we talk about access, we should step back and ask researchers and civil society groups about which curation tools and guardrails should apply to these systems, and how and who should decide.
Platform privatization and the distribution of power on the internet are among Bietti’s core research focuses. Will AI companies need to pay creators for including their work in the training data? Will the outputs from generative AI be considered fair use? Will small and nascent entrepreneurs be able to get a fair license to GPT technology and pioneer new innovations? These are some of her big questions. “Unless there is actual scrutiny over how this technology is being licensed and transferred to competitors and the public, what is being licensed, and the openness of this market, I think we risk reproducing our current mistakes with Big Tech,” she says.
Although these questions are broader than the legal profession, lawyers must be aware of the debates. After all, major law firms have already started using generative AI. On February 15, Magic Circle law firm Allen & Overy announced a partnership with Harvey, an AI tool built with OpenAI’s technology that allows lawyers to pose queries to “help generate insights, recommendations and predictions based on large volumes of data, enabling lawyers to deliver faster, smarter and more cost-effective solutions to their clients,” the press release read. For lawyers relying on Harvey, Allen & Overy has built in certain safeguards to address the model’s tendency toward hallucination, including requiring lawyers to reread Harvey’s usage rules and limits upon logging in.
For her part, Bietti doesn’t see efficiency for efficiency’s sake (or billables) as a moral good. If lawyers are going to use a tool like Harvey, are they going to be handling an increasing number of cases? Are they going to have to work on more cases now that a piece of the job is automatable? Will this increase disparities between categories of lawyers and workers? She’s not against generative AI and what it can do to improve any profession. But she does want professionals to take a step back and consider the impacts it may have on working conditions and mental health. (Although a variety of studies have looked at productivity loss that occurs when workers context switch, new research also examines the stress and impact on morale that context switching can have.)
“For me, what really changes is how we think about delegating work and apportioning tasks,” she says. “Counterintuitively, are we going to have to work a lot more because these tools are available?” Bietti says it’s both a cultural question and an industry-specific question. “Will there be different expectations in the workplace on the part of colleagues, peers, bosses?” she asks.
Work is more than the output of a task, Bietti says. “It’s how humans learn and find fulfillment. You can use AI to learn, but I think it’s important not to forget that delegating research to a system means removing the opportunity for students or junior lawyers to learn from doing the research. Those people will not be learning the same way anymore.” She clarifies, “They’re going to be learning a very different skill, which is how to work with machines to produce outputs, and how to achieve some fulfillment in the process.”
In the legal context, both Bietti and Hendrickson believe there needs to be a “human in the loop,” whether that’s a professor, a supervisor, or a partner. “You cannot take the human out,” says Hendrickson. “These tools are clearly not a substitute. Yet as they shift the nature of legal practice tasks and human contribution, we need to rethink how to train, motivate, and mentor lawyers and foster creative and analytical excellence.”
For a lawyer, however, a human in the loop brings an additional complication. It might be necessary for accuracy and fact-checking, but it raises questions under a different rule of professional conduct: confidentiality. Both the data used to train ChatGPT (or an attorney-centric analogue) and the questions attorneys ask it to respond to may be fed into a system that aggregates that data, not only for use in conversations with other users but also for review by technologists on the back end. As Law.com has pointed out, client information entered into ChatGPT is no longer confidential.
Hendrickson believes that if the legal profession is going to rely on AI in practice, there needs to be a proper understanding between clients and lawyers about both the confidentiality gaps and the level of human review of AI outputs. Thinking back to her transactional work involving sensitive information, she says that when you deploy technology in legal practice, you always have to be aware of what data and insights you’re giving the program to generate the output you need, where that data flows, and how it can be used or monetized.
She provides an example of inputting trust and wills information—often very personal information—or business trade secrets to generate a contract. Questions one might have to ask are: “Is it only being used for that contract? Is it being used for training? Are the data or insights gleaned from it being shared with others? What if any confidentiality applies? Does deidentifying or anonymizing the data or only using it for training—common proposed limitations—actually satisfactorily solve the personal, business, and data security risks?” she posits. She adds:
And how do you manage that as a law firm? We would want to put in place agreements with clients around permitted use as well as agreements with the providers of those technologies as to what specific uses are permitted. From a data privacy and trade secret perspective you might want a closed environment, but the reality is that given the market power of many AI, software, and cloud service providers, the protections that can be negotiated by users are limited so valuable data and insights often flow to vendors able to monetize and use them in a variety of ways.
Likewise, Bietti, who used to practice antitrust law, has questions about the trade secrecy and market collusion issues that could result from widespread use of generative AI chatbots.
On a global level, regulation is, of course, lagging behind. “Since the mid-2010s, hundreds of institutions—from the world’s tech giants to the Catholic Church or international organizations like the UN—have released non-binding guidelines for how to develop and use AI responsibly,” a recent article in Deutsche Welle said. Lilian Edwards, a professor of law, innovation, and society at Newcastle University, suggests, however, that voluntary guidelines might not be enough. But where should regulation start? Bietti believes we cannot “delegate all decision-making to individuals on complex systems of this sort.” She suggests industry-specific governance rules, with redline rules whereby different data sets or outputs can be banned depending on the context. Despite critiques, she believes that “the European Union is a little bit more advanced than other regions in thinking about how to govern AI in a more systemic way.” Hendrickson adds that, given the borderless nature of these technologies and the different regulatory approaches that are emerging, it will be important to develop common standards and best practices for third-party evaluation of generative AI tools and uses.
Hendrickson encourages lawyers and law students—and everyone—to approach generative AI thoughtfully as well as skeptically. She wants to make sure it’s an “enhancement versus a threat to societal and lawyers’ well-being.”
How to make that happen? Lawyers are going to have to talk to people outside the legal profession. As we engage with new technological advancements, Hendrickson says, discussions about whether to adopt generative AI will require more than pure legal analysis. “There are intellectual property laws and regulations where lawyers will be essential,” she says. “But there are also ethical, technological, and social issues, and these debates need to be informed by impacted communities, technologists, ethicists, interdisciplinary scholars, and others.” The challenge for the legal profession will be to feed these complex ethical debates into rules of professional conduct—and for practicing lawyers to have the knowledge and confidence to use technology in responsible ways.