In the post-AI legal world, what will lawyers do?
Legally trained generative AI will "free up" lawyers by taking away millions of hours of work. The profession's future depends on what comes after that.
“The best way I can hope to think about it — and I don’t have the answers, no one does — is AI-assisted legal services [will] free up lawyers to do higher-level work for their clients.”
Enrico Schaefer, longtime legal innovator and managing partner of Michigan-based law firm Traverse Legal, closed a recent interview with LegalTech News with this aspiration for the impact of generative AI on the legal profession. It’s a measured and sensible response. The problem, as I see it, is figuring out what that “higher-level work” could possibly be.
The idea that Large Language Models (LLMs) and similar AI will liberate lawyers from low-level tasks, so they can re-dedicate their time to work of greater value, is just the newest iteration of a long-held view among law’s forward thinkers. It’s previously been said about predictive coding, document automation, online legal research, probably email and the telephone if you go back far enough.
But lawyers have heard this promise for years — “This new technology will free up your time to do work of greater value.” Clients have been fed a similar line: “It will reduce the number of hours we spend on your tasks,” etc.
Has that been the case? Do we think lawyers are doing more sophisticated and satisfying work today than they were ten years ago? Are clients spending less for legal services and getting more value? Or have the real efficiency gains delivered by legal technology instead been channeled into shareholder profits, as in every other industry this century, while lawyers continue to grind out billable hours?
I think we should be skeptical that any new technology will be used in ways that benefit human workers. In the ordinary course, technology replaces human labour, and enough technology destroys human jobs and even entire industries. That’s pretty much the entire record of our economy from the cotton gin onwards.
The promised restitution for this destruction, the quid pro quo, is that the technology will also create new opportunities for human activity. Often it does, but not always — and the people who fill those new opportunities aren’t always the same ones whose jobs the technology destroyed. Very few Chrysler assembly-line workers went on to become automotive industry analysts.
The legal profession is about to go through what manufacturing already has. In the next few years, legally trained generative AI will replace lawyer labour on a scale we’ve never seen before. An enormous amount of lawyer activity consists of researching, analyzing, writing, developing arguments, critiquing counter-claims, and drafting responses. A machine has now come along that does most of these things, much faster than we do. Today, the machine needs lawyers to carefully review its efforts. Within two years, I doubt it will.
The entire history of mechanization and automation tells us that hours of lawyer labour are going to become milliseconds of machine activity. That will constitute a massive out-migration and micro-sizing of human effort from the legal profession, one that will leave a lot of lawyers looking for something to do with all their “freed up” time.
The hope we clutch onto is that lawyers will re-deploy into “higher-value” activity. But high-value work in the law is scarce. Law firms fight each other for it, paying huge free-agent fees for rainmakers who can land that work, precisely because there’s so little of it. Lawyers love “bet-the-company” files, but companies don’t bet themselves every day.
It’s all well and good to be liberated from low-value work. But liberated to go where, and do what? We can’t all level up to do “higher-level work” in the law — not unless that work suddenly multiplies, miraculously, like the loaves and the fishes.
But maybe it can.
We shouldn’t make assumptions about how the legal market will or won’t operate from this point onwards. It’s not as if anything else out there in the world faintly resembles how things used to be. Generative AI could upend all our preconceived notions about scarcity and abundance in the legal services sector. That’s one reason to think hard about new opportunities for lawyers.
Here’s another: If we can’t find something to engage lawyers’ skills and talents that’s beyond the reach of Large Language Models, then we’re looking at an extinction-level event for much of the legal profession. That would be bad for lawyers, for the multi-billion-dollar legal market, and for the societies whose democratic norms lawyers are at least nominally in charge of defending.
So how do we figure out what the future realm of lawyer activity will include? The fashionable thing these days is to ask ChatGPT those sorts of questions. I find that trend cutesy and tiresome and the answers flat and derivative. For as long as we still have lawyers to ask, I’d prefer to ask them instead.
So I did ask a bunch of lawyers, last week, in a very unscientific poll on LinkedIn. “I think the impact of legal AI could be so profound that many lawyers will be forced to rewrite their job descriptions,” I said. “So we might as well rewrite them to suit what we like. If you could get paid well for doing anything as a lawyer, what would you choose to do?”
The results, limited as they were, were still interesting. They included:
Direct advocacy: “Arguing in court, telling clients they won”; “Helping people in front of a judge”; “No paperwork … no adjournments or wasteful motions”
Direct client advisory: “Daily operational advice”; “Talk to clients to understand their unique needs and provide tailored advice in response”
Direct relationships: “Mentor”; “Mentoring”; “Networking and relationship building”
Plus an “I’m not sure” and a “This will keep a few of us awake at night.”
There’s a distaste for process, paper, and procedure here. More importantly, there’s an encouraging emphasis on people: talking to them, listening to them, advising them, arguing for them, helping them. The old stereotype of lawyers is that we hide away in corners, scanning books and documents looking for rules and loopholes. In reality, I think, we want more time with people, not less. We like being in the world. We like making a difference there.
I find this particularly interesting, because I’ve been struck recently by the sense of isolation in our profession. Not just in terms of working from home and missing out on daily interactions, but a deeper alienation from others and even from ourselves. We are stuck in corners too much, with just a screen and a keyboard and a billable hour tracker. We feel detached from other people. We’re lonely. No wonder we’re suffering a crisis of unhappiness these days.
I don’t know if there are “higher-value” opportunities awaiting lawyers in the post-AI world. I’m not sure what they might be if they do exist. (I’d love to hear your ideas in the Comments below.) And whatever they are, I’m very doubtful they’d fall neatly into a billable-hour pricing system. (More about the impact of all this on law firms in the next edition.)
What I do know, or at least believe strongly, is that the future of lawyer work is personal. We’ll provide value not primarily (maybe not at all) through our knowledge of the law or our ability to perform “lawyer tasks,” but through direct, sincere, and empathetic connections with people. We’ll meet, engage, listen, understand, diagnose, collaborate, discuss, recommend, and confirm with people. We’ll advocate, negotiate, accompany, assess, advise, counsel, mentor, plan, and strategize with people.
In the post-AI world, lawyers will be people first and everything else second. That feels a long way from where we are today. So we’d better get moving.
Nathan Shedroff, a renowned “interaction designer,” wrote a paper called “Information Design” in which he described a process by which we come to understand things.
Shedroff called this process the “continuum of understanding,” and it’s a useful model for distinguishing what humans are good at from what computers excel at.
The process of understanding begins with collecting data, which is developed into information, which helps us acquire knowledge, and culminates in wisdom.
Computers are good at the data and information levels. Humans excel at the knowledge (domain-specific) and wisdom levels.
This is the "high value" area that lawyers need to focus on. The knowledge level involves soft skills, informed intuition, ability to "read the room," and giving thoughtful advice based on deep knowledge and wisdom.
This is what lawyers should focus on. And it's what they should have been focusing on before ChatGPT was born.
Some good articles to read👇🏻
https://www.interaction-design.org/literature/topics/continuum-of-understanding
https://www.interaction-design.org/literature/article/the-continuum-of-understanding-and-information-visualization
Excellent post! I've been very disappointed to see people say vaguely that LLMs will free people up to do "more important" legal work without ever defining what that means. Also, are law schools preparing students to do these higher-level tasks? Based on most law school exams and the bar exam (even the next gen bar), we're still preparing students to solve complicated word/logic problems that computers are on the verge of mastering.
If (as I hope) you are right that the true future of lawyering lies in human interaction, I still think that the overall human workload would have to decrease, at least under our current service model. The real power of computerization should be that we should finally be able to scale up delivery of legal service to more people who actually need it but can't afford it. There are plenty of people out there who are at the mercy of a legal system they don't understand and can't begin to engage with. LLMs are the structure on which such a bridge could be built, but lawyers will have to get together and figure out how to build it sustainably.