Mid-December, early in the morning, pre-caffeine, The Atlantic’s daily email gave Julia McKenzie Munemo a jolt. “ChatGPT will end high school English,” the email declared. Munemo, the director of Williams College’s Writing Center, clicked on one of the featured links, “The End of High School English,” by Daniel Herman, posted on December 9, 2022. She read the article, eyes wide, and immediately emailed it to colleagues at her top-flight liberal arts college, where writing-intensive classes are required. “What is going on?” she asked. Without waiting for a reaction, she texted the link to her sister, also a writing center director, with two words: “We’re toast.”
As a writer and writing teacher, Munemo thought that her reaction would be typical. “I did not see this coming. That sounds naive now, but I didn’t see this coming,” she says. It was the first she had heard of ChatGPT, which was publicly released on November 30, 2022. Just a few weeks later, the AI tool (the “GPT” stands for “generative pre-trained transformer”) was seemingly everywhere—from national outlets to trade publications. And everyone, from technologists to doctors to artists, was talking about it. Indeed, news of ChatGPT and how it would change knowledge work and creative professions had even infiltrated one of the more conservative fields: law. For Munemo, though, it felt deeply personal; it was about her life’s work.
Munemo was aware that Williams would have to think carefully about how to deal with ChatGPT, both in terms of rules (e.g., around plagiarism) and, on a much deeper level, in terms of the craft of writing and all that it entails. She probed her colleagues from different disciplines to find out how they were thinking about the tool. She sent some information about ChatGPT, with a link to try it out, to the members of her faculty advisory committee—three professors each representing one of Williams College’s academic divisions. “I don’t want you to feel like this is a homework assignment,” she told them, “but I also want you to know this is happening.” One of those professors engaged extensively with the tool. “I gave it some of my hardest prompts,” she recalls him saying, “and the thing it spit back to me was surface level but solidly in the B range.” Later, at a Christmas party, an acquaintance mentioned to her, “I gave it some of my prompts from college, and I would’ve gotten an A on the paper it gave me.” Recollecting the incident, and her discussions, Munemo says, “I began to understand why someone might think what ChatGPT spits out is good. It looks good, right? It’s clean. It has transitions. It introduces and concludes. It checks all the boxes.” But, she continues, “it doesn’t go into any depth.” To her, there was a lack of humanity behind the words—something she felt viscerally.
Once Munemo began to recover from her initial shock, she needed to think carefully about what the technology really meant for her and her craft, for her students and college, and for society more broadly. To do that, she says, she needed to write. It was a process she had done (and taught) thousands of times before, a process that produced results. What resulted was a piece for Inside Higher Ed, “A Message to Students About ‘The Bot,’” in which the essay itself proved the point she wanted to make. She said she didn’t quite know what to make of ChatGPT when she sat down to write, but as words flowed, ideas came. It was only as she put the finishing touches on the piece that she fully understood her initial reaction to ChatGPT. She wrote in the article:
We’ve come to see the goal of writing as getting to our point quickly, making a strong argument and concluding carefully, all with perfect grammar and syntax. But anyone who has revised a paper, come back to an idea after a sleep or a walk or a shower, or worked with a tutor to brainstorm new directions will tell you that the true goal of writing is to clarify, understand and experience our own thinking.
For Munemo, the process of writing—whether it’s a business memo, a philosophical essay, or a simple Post-it of daily to-dos—helps humans wrestle with and make sense of their ideas. “I think of the simplest, most boring thing: writing an email to ask the student tutors at the writing center to come to a training,” she says. “I write it and then by the end of it—I’ve typed my name, I’ve signed off—I understand what I am actually asking of them. It’s often different from what I went in thinking I was asking of them.” She reframes. The whole process takes 10 minutes. “It isn’t a hard process, but even through writing a short email, I’ve figured something out,” she says. If, as Jonathan Malesic said, “learning to write trains your imagination to construct the person who will read your words,” an email not only helps you clarify your own ideas but also serves as a social-emotional learning tool that teaches empathy.
Many point to the ways that generative AI chatbots might help writers: by copyediting, helping organize outlines, or perhaps even lowering barriers to access. It could, as Munemo concedes, give students perfect grammar. To her, however, writing isn’t all about grammar, and she “wish[es] we could accept writers as people who don’t write in perfect robotic sentences.” Indeed, “one eerie consequence of using programs like ChatGPT to generate language is that the text is grammatically perfect,” wrote Naomi S. Baron in the Chicago Sun-Times. But, she says, “it turns out that lack of errors is a sign that AI, not a human, probably wrote the words, since even accomplished writers and editors make mistakes.”
What technologies like ChatGPT won’t give students, Munemo stresses, is the process of selecting specific words, organizing the ideas, and arranging the fragments, and what that brings them: the act of thinking.
Others were coming to similar conclusions about ChatGPT. Jane Rosenzweig, director of Harvard College’s Writing Center, wrote in the Boston Globe that one of the lessons she imparts to students is that “writing … is a way of bringing order to our thinking or of breaking apart that order as we challenge our ideas. We look at the evidence around us. We consider ideas we disagree with. And we try to bring a shape to it all.”
Munemo isn’t particularly interested in catching students who use ChatGPT. Williams College has an honor code, and Munemo herself has a clear policy on her website. “Check with your professor,” she tells students. “There may be cases where they say it’s OK to use ChatGPT, but you need to cite it if you use it.” She also knows that college life is busy, stressful, and demanding. She knows students will use the program. But her hope is that they make the choice not to let it do the work for them. “What I care about is that students are given every opportunity to use and expand their brains—and graduate ready to use their brains more,” she says.
Munemo says that at its most basic and most profound, writing is a communication tool. It’s an opportunity for connection. “When we’re not thinking critically and we’re not using our brains, we’re not connecting to the people that we’re communicating with,” she says. And, as she wrote in her essay, “Writing, rewriting and revision work. The process helps you think.” Even the most powerful processors cannot replicate those processes.
ChatGPT at law school
Jonathan Choi, McKnight Land-Grant Professor at the University of Minnesota School of Law, is less interested in what work ChatGPT will take away from lawyers, and more interested in how it will change the nature of that work. As Ron Dolin discussed in “Legal Informatics,” innovation helps do away with the activities that “lawyers hate doing and clients hate paying for,” as one innovator he interviewed put it. Figuring out what that looks like, and how to get there, is the next step.
Choi made headlines in early 2023, when he and three colleagues released the paper “ChatGPT Goes to Law School.” Choi, along with Kristin E. Hickman, Amy Monahan, and Daniel Schwarcz, asked ChatGPT to respond to exams composed of both multiple-choice questions and essays for four courses: Constitutional Law: Federalism and Separation of Powers; Employee Benefits; Taxation; and Torts. They then blindly graded ChatGPT’s exams, which were placed among a group of student papers. Overall, they found that across the four exams, ChatGPT averaged a C+—a passing grade at the University of Minnesota, but one that would have placed a hypothetical student on academic probation. “In writing essays,” the four professors wrote, “ChatGPT displayed a strong grasp of basic legal rules and had consistently solid organization and composition. However, it struggled to identify relevant issues and often only superficially applied rules to facts as compared to real law students.” Furthermore, they found issues with focus (it would draw on pieces of a law different from those relevant to the course), a lack of detail (identifying the correct rule but not explaining its application), and an inability to respond to open-ended prompts (e.g., in a torts essay, not identifying the theories of negligence raised by the facts), all of which involve skills essential for would-be lawyers.
A similar study from Christian Terwiesch, Heller Professor at the Wharton School of the University of Pennsylvania, resulted in a B or B- on his business school exam. There, he found an impressive grasp of “basic operations management and process analysis questions” but poor performance on basic middle-school math. Further, Terwiesch said, ChatGPT did less well on questions that required “advanced process analysis.”
While the present form of ChatGPT is equal parts impressive (it’s at least receiving passing grades!) and problematic (for all the reasons noted above), Choi notes, significantly, that it is a vast improvement on its previous iteration—something to bear in mind as the technology continues to improve. Choi, whose own work analyzes legal issues using natural language processing, had access to ChatGPT’s predecessor, GPT-3, and had been running it through similar tests. “I was a little discouraged,” he says. “I was running this experiment in October, and it wasn’t performing that well.” The new version, ChatGPT, however, exceeded expectations.
Many of Choi’s colleagues thought that ChatGPT would not be able to handle law school exams at all. “In some ways, their beliefs were vindicated because we found that ChatGPT performed at the bottom of the class,” he says. “But the idea that ChatGPT wouldn’t be able to write exam answers at all, that would’ve been true of generative AI a year ago, and now we’ve seen that ChatGPT can achieve a passing grade.”
Alarmist headlines like “ChatGPT May Be Coming for Our Jobs” from Insider or the more specific “Will ChatGPT Make Lawyers Obsolete? (Hint: Be Afraid)” from Reuters have made lawyers take notice. While GPT-3 wasn’t widely available or intuitive to the public, ChatGPT has galvanized the collective imagination. Still, Choi isn’t worried about AI replacing lawyers. He says:
The core concern is that even though we found in our study that ChatGPT was generally reliable in stating legal doctrines, even if it’s 90 percent reliable, that’s probably not good enough to replace human lawyers. In most contexts, you would want your human lawyer to be 95 percent, 99 percent, almost 100 percent reliable. Now, there are some contexts where that’s not the case. Certain kinds of legal practice might be comfortable with a lower level of accuracy, but the kinds of things that Harvard Law School is training students for are not going to be replaced by computers anytime soon.
As with due diligence or document review, AI tools like ChatGPT may simply supplement and bolster creative human intelligence. ChatGPT, for instance, might be used to help law students or first-year associates with first drafts. With elaboration and additional reasoning, “this type of collaboration between a human and ChatGPT would almost certainly produce better results than using ChatGPT alone,” the Minnesota law professors wrote in their paper. It might change the way legal jobs are conducted, Choi says, but hopefully it will eliminate only the kind of work “nobody really likes doing in the first place.” “The kind of work that will remain will require deep legal reasoning and the application of legal judgment.”
The precise contours of how law students—and lawyers—will use tools like ChatGPT are unclear, but that is the point. “There’s so many different uses of this technology, and students are experimenting, dipping their toes into the water, but they don’t have a strong idea of what’s most likely to be useful in their professional careers,” Choi says. “And in a way that’s the role of professors, to be able to identify what would be the most useful and to teach students.”
Prompts for the future
In early January, Kevin Roose wrote in the New York Times that there may be ways to incorporate ChatGPT into classes rather than forbid it outright. ChatGPT could be useful, for instance, to generate outlines, help with revision, or brainstorm essay prompts. Still, Roose admits, “I loved school, and it pains me, on some level, to think that instead of sharpening their skills by writing essays about ‘The Sun Also Rises’ or straining to factor a trigonometric expression, today’s students might simply ask an A.I. chatbot to do it for them.” But, he says, generative AI isn’t going away. It’s time to figure out what it means, how we want to work with it, and whether it will bolster creativity or not.
For Choi, that means teaching law students how to make ChatGPT useful by experimenting with prompt engineering. “It turns out that to use ChatGPT for legal writing, you need to give it the appropriate prompts to have it output something that works for your task,” he says. This means thinking about what style, tone, or organization you need; what concepts might be required; and more. It’s also where creativity comes into play. As Atlantic contributor Charlie Warzel put it, “Like writing and coding before it, prompt engineering is an emergent form of thinking. It lies somewhere between conversation and query, between programming and prose. It is the one part of this fast-changing, uncertain future that feels distinctly human.”
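To make that concrete, here is a minimal sketch of what an “engineered” prompt can look like next to a vague one. It is an illustration, not anything from Choi’s study: the fact pattern, the prompt wording, and the model name are all assumptions; the only real interface used is the standard OpenAI Python client.

```python
# A minimal sketch of prompt engineering (illustrative, not from Choi's
# study): the same torts question asked two ways. The vague prompt leaves
# scope, structure, and tone to the model; the engineered prompt pins
# them down.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Is my employer liable if I slip on an icy sidewalk at work?"

engineered_prompt = (
    "Facts: an employee slips on an icy sidewalk outside the office.\n"
    "Task: identify each plausible theory of negligence raised by the "
    "facts, state the governing rule, and apply the rule to the facts.\n"
    "Organize the answer with one heading per theory. "
    "Keep the answer under 400 words. Use an academic tone."
)

for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 60)
```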
Choi and his coauthors offer some guidelines. For instance, they suggest including an instruction on tone, such as “academic,” placed at the end of the prompt; they found that prompts specifying tone were more successful at producing desired results than prompts asking ChatGPT to “adopt a specific identity,” like an employment lawyer. Students might also learn how to stop ChatGPT from “hallucinating” sources, as it has been known to do. Choi et al. explain that you can instruct ChatGPT to “refer to relevant court cases. Do not fabricate court cases.” They also explain how to use ChatGPT to generate longer text, follow word limits, and more.
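As a rough sketch, those guidelines could be folded into a small helper that assembles a prompt: the anti-fabrication instruction, a word limit, and the tone instruction placed last. The helper and its defaults are hypothetical; only the quoted anti-fabrication sentence comes from Choi et al.

```python
def build_exam_prompt(question: str, word_limit: int = 600,
                      tone: str = "academic") -> str:
    """Assemble a prompt along the lines Choi et al. describe
    (hypothetical helper, not code from their paper)."""
    return (
        f"{question}\n"
        # Anti-hallucination instruction quoted from their guidance:
        "Refer to relevant court cases. Do not fabricate court cases.\n"
        f"Answer in no more than {word_limit} words.\n"
        # Per their findings, the tone instruction goes at the end:
        f"Tone: {tone}."
    )


print(build_exam_prompt(
    "Identify the theories of negligence raised by the following facts: ..."
))
```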
“Depending on how reliable future iterations of ChatGPT are, I think it could revolutionize the way that we teach and the way that we administer exams,” Choi says. “If lawyers start using these tools in their own practice, because we’re a professional school, it would be incumbent on us to adapt the ways that we teach, to reflect the ways that our students will practice in real life.” But, as he and his coauthors said in their article, ChatGPT cannot replace a lawyer’s ability to reason—and a client’s desire for that human reasoning.