Being a Competent Lawyer in the Age of Generative Artificial Intelligence

Insight November 6, 2024

A simple personal injury case, Mata v. Avianca, became famous last year when a lawyer filed a brief citing precedents on why his client’s case should not be thrown out. Upon closer inspection, opposing counsel argued that some of these precedents could not be located. When confronted by the Court, one of Mata’s lawyers insisted he had been “operating under the false perception that [ChatGPT] could not possibly be fabricating cases on its own.” He admitted, “if I knew that, I obviously never would have submitted these cases.” Reading this case, I realized it is the archetypal example of generative AI (GAI) hallucination, underscoring the importance of lawyers becoming familiar with the strengths and limitations of GAI tools. What exactly are lawyers’ ethical duties as they enter a new, uncharted technological landscape and use GAI in their practice? What ethical risks and benefits should lawyers be aware of when using GAI?

Technology is evolving rapidly, and large language models (LLMs) are among the latest innovations available to lawyers to enhance client service. In 2009, the ABA convened the Commission on Ethics 20/20 to examine the impact of technology and confidentiality issues on the legal profession. The Commission reviewed the Model Rules of Professional Conduct (MRPC) in light of emerging technologies and the globalization of the profession and proposed a comment to the rule on the duty of competence. In 2012, the ABA House of Delegates adopted the Commission’s recommendations, including a comment to Rule 1.1 stating that “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education, and comply with all continuing legal education requirements to which the lawyer is subject.” While this duty of technological competence offers some guidance on how lawyers should approach new technology in their practice, it remains vague about how exactly lawyers are expected to meet this obligation in specific cases, GAI being one of them. Even though the Commission, as one of its members recently suggested, drafted the comment to retain some vagueness, allowing the lawyer’s toolkit to evolve with technological developments rather than being confined to a rigid definition, the advent of LLMs and GAI demands more precise guidance. What does technological competence mean in the era of LLMs and GAI? Should lawyers possess a certain level of technical expertise regarding the technologies they use, or is it more important to understand the potential impacts these technologies could have on their clients, stakeholders, or society?

To address this issue, the ABA and legal scholars have primarily focused on the effective and responsible use of technology as part of the duty of technological competence. This includes emphasizing the need for lawyers to safeguard privacy, protect metadata in documents, understand cloud computing and e-discovery, and adhere to cybersecurity standards, as Heidi Frostestad and others have pointed out. Some scholars, like Jon Garon in his article “Ethics 3.0: Attorney Responsibility in the Age of Generative AI” (2023), take a more comprehensive approach, arguing that understanding the scope of a lawyer’s duty of technological competence requires looking beyond the MRPC. They shift attention to other regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), which imposes data privacy and security rules on legal services. Still others have developed specific practices and documentation requirements to meet the technological competence requirement. For instance, Mark Shope, in his article “Lawyer and Judicial Competency in the Era of Artificial Intelligence: Ethical Requirements for Documenting Datasets and Machine Learning Models” (2021), suggests that one way to demonstrate a higher level of competence with AI tools and transparency in legal practice is to implement dataset and model disclosure forms. These forms would help ensure that lawyers are adequately informed about the data fed into the GAI tools they employ. Dataset and model disclosure forms document the motivation, composition, collection process, cleaning and labeling practices, and uses and maintenance of a specific GAI model.

I propose a different perspective. Instead of viewing technological competence solely through the lens of enacting more regulations and documentation or focusing predominantly on privacy or cybersecurity concerns—all of which are important—I suggest delineating the scope of this duty by focusing on the ethical risks and benefits associated with the technological tools lawyers use. In my research, I argue that technological competence, particularly in the age of GAI, requires understanding not just any risks but ethical risks. These risks include algorithmic bias, hallucinations, and the potential undermining of a lawyer’s ability to develop practical wisdom and sound judgment. Further, a typology of ethical risks might include reputational risks to both the law firm and the client, social risks that may affect other stakeholders or society at large, and environmental risks, because AI is not completely immaterial; it requires, as Kate Crawford’s Atlas of AI (2021) has foregrounded, a material infrastructure and environmental resources to build it. AI’s ethical risks thus also include the exploitative use of lithium and other natural and human resources needed for GAI to operate.

Lawyers should know the ethical risks GAI brings as part of their duty of technological competence. I hypothesize that this obligation to know means being aware of the ethical risks, including the social and environmental dimensions, that GAI could trigger as it is used in legal practice. But if lawyers need to know the ethical risks of GAI, a correlative question emerges: what kind of AI ethics should be taught, and how, in legal profession courses?

GAI ethics is not a monolithic concept; it is a contested field with divergent views on how best to identify and address the ethical dimensions of GAI. In my research, I map out several key approaches to GAI ethics—namely, virtue ethics (e.g., what it is to live a flourishing life in the law), normative ethics (e.g., what we owe to each other, or what principles and norms should guide our actions as lawyers), consequentialism (e.g., which consequences are “good” and which we should try to bring about with our behavior), and existentialism (e.g., how we should act authentically and responsibly in the face of uncertainty, without any external criteria to guide us), among others—to provide more comprehensive guidance for future lawyers. The question of how we ought to approach the ethics of AI, or what Thomas M. Powers and Jean-Gabriel Ganascia refer to as “the ethics of the ethics of AI,” must be considered to evaluate how to teach and identify the ethical risks and benefits in the legal profession. This approach, I argue, will shed light on the meaning of Comment 8. Moreover, it will identify areas that should be added to legal education so that lawyers can fulfill their duty of technological competence.

For instance, within the Aristotelian, virtue-ethics strand, philosophers like Nir Eisikovits have claimed that GAI poses an existential threat to humanity, though not in the way people typically imagine it. GAI will not bring about a large-scale catastrophe. The existential threat comes instead from the fact that GAI will likely change what it means to be human. If one of the essences of being human is to make sound judgments and to hone our ability to weigh conflicting demands and make tough choices, GAI will gradually undermine this effort in many personal and professional contexts. Lawyers and professionals in other fields will slowly lose the capacity to make the judgments now delegated to GAI. This is an existential loss.

Another ethical approach, rooted in duties and normative principles, highlights other ethical risks as GAI affects principles like fairness, respect, and trust. In this camp, scholars have argued that GAI might spark distrust because it sometimes leads to unreliable outcomes, or because its outcomes are indifferent to the interests of the people affected by the algorithm. Therefore, the ethical question here lies in opening participation channels to those affected by GAI to bolster their trust and make them feel heard, respected, and part of the conversation. In this view, AI should involve a broader and more diverse set of stakeholders in decisions about the uses to which algorithms are put, the data they are fed, the criteria used in the training process to evaluate classifications or predictions, and the methods of recourse available for raising concerns about, and securing genuine, responsive action against, potentially unjust methods or outcomes.

Unlike previous technologies, GAI seems to be altering the way we think about ethics itself. Interpretations of Comment 8 should reflect this complexity and conceptual depth. Identifying the ethical risks of GAI, therefore, requires a fresh understanding of what GAI ethics means in legal practice and what lawyers need to know to recognize the ethical risks they face. I hope this line of inquiry will help address two key questions related to Comment 8, which I call the “risk diagnosis and assessment” question and the “legal education” question. The first focuses on identifying which ethical risks lawyers need to be aware of when using GAI, while the second concerns what lawyers should know about GAI ethics before entering practice to mitigate these risks.

On July 29, 2024, the ABA issued Formal Opinion 512 on “Generative Artificial Intelligence Tools,” addressing how the duty of technological competence can be fulfilled. Opinion 512 states that lawyers cannot abdicate their responsibilities by relying on GAI to perform tasks involving their professional judgment. Although the opinion leaves a wide margin as to how lawyers may comply with this duty and assume their responsibilities, it establishes two guidelines: what can be called “an epistemological guideline” and “an ethical guideline.” The epistemological guideline states that, as part of the duty of technological competence, lawyers should know “the evolving nature of GAI.” There is no one way to do this. Lawyers may consult a third party with technical expertise in GAI, attend continuing legal education programs, or immerse themselves in self-study. Knowing about AI ethics, I believe, should be added to this mix of options on the path to becoming technologically competent.

The ethical guideline establishes a broad obligation of having “ethical duties” regarding the use of GAI, but without much qualification as to what these duties are. It lists some examples, such as preserving confidentiality, communicating to the client when and how GAI is used in their matters, and adopting supervisory responsibilities regarding other members of the law firm who use GAI. Above all, lawyers should recognize the “inherent risks” of adopting GAI, of which “ethical risks” are an essential part.

Together, the epistemological and ethical guidelines lay the groundwork for interpreting the duty of technological competence with respect to GAI as including the duty to know the ethical risks of using GAI, which, in turn, includes knowing the AI ethics applicable to legal practice. This opens a new line of inquiry into determining what kind of approach to AI ethics is most appropriate for lawyers to learn.

My research interest in this topic originated along several paths. First, while helping to teach courses on moral leadership and moral practice at the Harvard Kennedy School, where we distilled moral philosophical traditions into actionable strategies for improving public decision-making, I realized that moral philosophy could be valuable not just for policymakers but also for legal professionals as they improve their decisions when using GAI. Second, my experience co-teaching a course on “Digital Justice: Opportunities and Challenges” with Oladeji Tiamiyu at Universidad de los Andes (Colombia) helped me identify the need for lawyers to understand not just the mechanics but also the ethical dimensions of GAI. Finally, reading the Mata v. Avianca case made me aware that some lawyers still lack an understanding of AI’s potential to both enhance and impair legal practice. I think clarifying Comment 8 might help legal practitioners improve their decision-making as to when, how, and in what sense GAI is a useful tool.


Nicolás Parra-Herrera is a Student Fellow at the Center on the Legal Profession at Harvard Law School; he is an S.J.D., originally from Colombia, researching the intellectual history and philosophy of alternative dispute resolution in the twentieth century in the U.S. More recently, he has been interested in the intersection between dispute resolution, technology, and the legal profession. He is also a Civil Discourse Graduate Fellow at the Safra Center for Ethics, a Byse Fellow, and a former Graduate Fellow at the Program on Negotiation at Harvard Law School. He has been a Visiting Professor at Universidad de los Andes (Bogotá, Colombia), both at the Law School and Management School. Before his studies at HLS, he clerked at the Colombian Constitutional Court and worked at private law firms.