The one skill lawyers need to use Generative AI
As artificial assistants become ubiquitous in law firms, lawyers with clear legal minds who can methodically analyze problems and point the AI towards practical solutions will be essential.
I have a theory that the last 20 years of legal technology have inadvertently eroded lawyers’ capacity for rigorous, disciplined thinking.
Here’s an example of what I mean. Early online legal research tools encouraged us to distill the essence of our complex legal questions into short fragments of Boolean keywords, piecemeal clues with which the software produced a pile of cases or laws that we had to rifle through, looking for relevance. The tools have gotten more sophisticated, but the keyword-inquiry habits (since cemented by Google and other search engines) remain. By constantly generalizing and simplifying our concepts, we also shrank our mental vocabulary and dulled the sharper edges of our legal minds.
In a similar fashion, early document automation software merely updated and supercharged the venerable practice of grabbing an established precedent and replacing names and terms as needed. These programs massively increased efficiency, but they also discouraged the habit of fully considering the particular circumstances and objectives that the document was intended to record and enable. Again, the software has greatly improved, but it still encourages us to think small, to insert and monitor tiny variations in a familiar script.
I’m not really complaining about these changes; the time savings and productivity gains they delivered were off the charts. But I feel like they also softened our thinking, dimmed our clarity, and contributed to a weakness for procedural shortcuts. Lawyers don’t have to think things through as thoroughly as they once did. With more challenging opportunities and demands coming our way, that needs to change. And I think Generative AI will help get us there.
This all occurred to me while watching a livecast of Thomson Reuters’ presentation, a couple of weeks ago, on the integration of Generative AI into a host of TR’s legal practice tools. Bob Ambrogi captured the nature of the key change:
In traditional legal research, the user enters a query and gets a response in the form of a long list of cases, statutes and other resources. It is then up to the user to plod through it all in search of the answer. With AI-Assisted research, the user asks a question and the AI delivers an answer in the form of a narrative. No more plodding through cases. The narrative includes footnotes pointing to the cases or other materials on which it bases its answer.
Generative AI is software, but as Ethan Mollick has repeatedly pointed out, it behaves like a person. This means that when you’re dealing with Generative AI, you don’t have to communicate in fragments or phrases or keywords. You communicate as if you’re speaking with an assistant or a junior. You wouldn’t call an associate into your office and bark “debt restructuring Colorado exceptions fraud” and expect them to know what you’re talking about. You’d explain the situation and describe what you’re looking for, so that you’d get a complete and intelligent answer. Gen AI works the same way.
There’s a lot of talk these days about “prompt engineering,” the art of crafting a query to your Gen AI assistant so as to produce the best results. But I think this overly complicates things. What we really need to do, as lawyers, is rebuild and restore our capacity for what I’d call strategic legal reasoning.
Strategic legal reasoning is the ability to understand legal problems and develop solutions through the disciplined, methodical application of logical, analytical, and critical thinking skills. When we’re really delivering incisive and valuable insights to our clients, operating at the top of our licenses, this is the kind of activity we’re engaged in (and that we usually find the most fulfilling).
Strategic legal reasoning is how lawyers come to understand a problem and devise pathways to a solution. I think there are basically four steps:
See the Big Picture. Start by understanding the ultimate or overarching objective at hand. What are we trying to accomplish here? What’s the desired outcome that the problem is blocking, and why is it desired? Place the problem in its larger business, social, or individual context. This comes from the client, of course.
Examine the Problem. Strive to understand the problem as well as you can. Study it from different angles: What you first glimpse might not be the only or the most important facet. Make a note of the contributing factors that created the problem. Ascertain exactly how the problem is blocking the objective.
Acknowledge Your Parameters. What limitations are you operating within, such as time, personnel, and other resources? There’s the “ideal” solution, and then there’s the solution that will actually work while satisfying these other demands. You can always aim towards the first, but you need to prioritize the second.
Exclude for Relevance. Apply your specifically legal reasoning skills to carve away what’s distracting from or not germane to the primary objective. Like a sculptor eyeing a block of marble and envisioning the statue inside, identify what you don’t need to worry about in order to achieve your outcome.
If you’ve followed these steps (or variations thereon, depending on the particular task at hand), you should understand the problem so well that you can pass the hardest test of all: Explain the whole thing to someone else precisely and concisely, so that they can solve it themselves. This is the ability to instruct, and I think it’s the most critical skill in the use of Generative AI.
Most lawyers don’t instruct others very well. They provide too little information, omit necessary context, or fail to identify what they don’t want. If you were ever told by a senior lawyer to do something, and then got yelled at because you didn’t bring back what they wanted, you know exactly what it’s like to be on the receiving end of poor instruction. As a profession, we can’t afford that shortcoming anymore.
If you hope to get the most out of the Generative AI programs that will soon be rolling into law firms and showing up in standard office software, you need to hone your ability to instruct. You do that by following the four steps of strategic legal reasoning above, and then compressing your understanding of the present problem into a detailed yet precise set of directions that ask for a solution, leaving out nothing important and including nothing irrelevant.
The ability to effectively instruct — whether a flesh-and-blood assistant or, more commonly from now on, an artificial one — is the key high-value skill required of lawyers in the AI age. Law schools should teach it in every class, regulators should test it as a core competence for licensure, and law firms should make it one of the centrepieces of their professional development programs.
Because Generative AI behaves like a person, we need to work with it like a person, which means providing the necessary explanation, relevant detail, and precise directions that will allow it to achieve the big-picture goal. We need to know how to instruct. Gen AI doesn’t make that skill irrelevant. I think it makes it essential.