Law has a magic wand now
Some people think Large Language Models will transform the practice of law. I think it's bigger than that.
There’s a reasonable chance that Large Language Models (LLMs), led by GPT-4 today but who knows what by next weekend, are going to change the world. Like, the actual world, spinning underneath you right now.
Bill Gates ranks AI with the internet and the mobile phone in terms of revolutionary impact. Microsoft’s GPT-4-powered Bing is doing things no search engine should be able to do. Reputable scientists are asking questions that normally get you removed from faculty mailing lists, like: Has GPT-4 somehow developed theory of mind? Is it exhibiting the first signs of artificial general intelligence?
Answering those kinds of questions is so far above my pay grade I can’t even see it from here. So I’ll settle for a much less challenging query: What will LLMs do to the legal sector? Are we experiencing, as an NYU Law School professor declared yesterday, “the end of legal services as we know it”?
Well, let’s start with what we do know. GPT-4 has demonstrated not just competence with, but mastery of, the LSAT and the Uniform Bar Examination, scoring in the 88th and 90th percentiles respectively on those tests. “Large language models can meet the standard applied to human lawyers in nearly all jurisdictions in the US,” write Prof. Dan Katz and his co-authors, “by tackling complex tasks requiring deep legal knowledge, reading comprehension, and writing ability.” That’s something.
GPT-4 can also do things that only lawyers (used to be able to) do. It can look up and summarize a court decision, analyze and apply sections of copyright law, and generate a statement of claim for breach of contract. (Have I mentioned that GPT-4 is ten days old?) Those are just three quick examples I found on Twitter; with yesterday’s announcement that OpenAI is rolling out plugins to integrate ChatGPT with third-party services and allow it to access the internet, those examples could soon number in the thousands.
Will GPT-4 and other LLMs replace lawyers? I keep hearing this question, and it fascinates me, because I think it really speaks to the legal profession’s insecurities. Doctors and architects and engineers aren’t, for the most part, asking themselves whether GPT-4 will replace them, because they’re confident in the other skills and functions they offer that AI can’t replicate. (Yet. I’m not selling this technology short.)
But for most lawyers, our entire professional functionality is rooted in our expertise with knowledge and our fluency with words. We understand the law, we apply the law to facts, and we analyze the results in order to reach an actionable conclusion. We create untold types of documentation and correspondence, with language precisely arranged, deployed, and manipulated to obtain for our clients the results they want.
That’s not all we do. It’s not all we can do. But it sure as hell is the vast majority of what our billable time is spent on. And now someone has gone and invented a Knowledge and Words Machine that does all of those things, in hardly any time at all. Why would we not be alarmed? There’s a reason why “legal services” is #1 on this list of industries most at risk of disruption from generative AI.
Look at the naming conventions of LLMs for another clue. Casetext has released an incredibly powerful program it describes as a “legal assistant that does document review, legal research memos, deposition preparation, and contract analysis in minutes.” It looks awesome.
But this program is not called “AI Assistant”; it’s called “CoCounsel.” Just like Microsoft’s new GPT-4-powered productivity tool for Word, Excel, and PowerPoint is called “Copilot.” These are not the names you give to assistants. They’re the names you give to your colleagues, your partners, and your peers. “Co-” means equal.
Now, let’s be clear. LLMs are not people, and they’re not sentient beings (although they fake sentience alarmingly well). They don’t “think” the way humans think. But we don’t fully understand how they work (and their creators aren’t interested in sharing the details with us), and they perform their tasks with a speed and apparent ease that defies coherent explanation.
So this seems like a good time to remember Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” Note carefully: Clarke didn’t say the technology was magic; he said it couldn’t be differentiated from it. That’s what GPT-4 looks like to the legal sector today. In practical terms, it’s a magic wand for law.
What happens when you introduce a magic wand into the legal market? On the buyer side, you reduce by a staggering degree the volume of tasks that you need to pay lawyers (whether in-house or outside counsel) to perform. It won’t happen overnight: Developing, testing, revising, approving, and installing these sorts of systems in corporations will take time. But once that’s done, the beauty of LLMs like GPT-4 is that they are not expert systems. Anyone can use them. Anyone will.
(The PeopleLaw market won’t be forgotten. As LLMs grow more capable and as high-quality datasets of legal knowledge for everyday problems are developed, we’ll also see ordinary people logging on to navigate a free, public, online hub of sound answers to legal questions and basic remedies to legal problems, of a type I described last spring.)
What about legal services sellers? Law firms will (and have already begun to) adopt legal LLMs — their clients will expect it, their lawyers will demand it (lawyers love intuitive technology, which is why they don’t like most legal tech), and their competitors will do it if they don’t. But a business that sells a single asset that a magic wand just made obsolete isn’t a business with long-term upside. Or medium-term. Or even next Christmas.
“Lawyer hours worked” is the inventory of law firms, and LLMs are going to massively and permanently reduce that inventory. But “lawyer hours worked” is also integral to how law firms price their offerings, generate their profits, measure their lawyers’ value, decide on promotions to partnership, and establish standards of organizational commitment. It’s a core part of their business identity. There is no way LLMs will leave law firms unscathed.
As I said earlier, GPT-4 is ten days old. It’s exceedingly foolish to try drawing any firm conclusions from such scant evidence, and I won’t try. But I can’t shake the feeling that someday, we’ll divide the history of legal services into “Before GPT-4” and “After GPT-4.” I think it’s that big.
I think law firms will have to fundamentally re-examine what they’re going to sell and what they organize their culture around. And I think that lawyers will need to re-imagine who we are, what we do, and what we’re for, because it shouldn’t be this easy for a machine to become a magic wand when pointed at the legal profession.
Will AI replace lawyers? I absolutely don’t believe so. But if somehow that happens, it will say more about us than it does about the AI.
Excellent read, agree with a lot of your points. We launched www.spellbook.legal last summer and have onboarded nearly 1000 law firms/solos onto the platform. We have very much taken a "lawyer-augmentation" approach like GitHub Copilot has for programmers. We are not so much an assistant or co-pilot, but a precision power tool that enables a lawyer to do bespoke tasks much more effectively. The feedback from our users has been incredible.
One trend we've noticed is that small firms are much, much more receptive to this. Many of them have been billing on a flat-fee basis for years at this point, and they need all the help they can get to improve their margins. It is BigLaw hourly billing that is at stake, but literally thousands of small firms/solos are adopting this technology en masse. This tech also makes it more viable to operate as a solo, since you would no longer need as much support staff.
Looking forward to reading more of your takes!
If A.I. replaces lawyers, I'm not sure anyone other than lawyers will care (which I say lovingly to my fellow lawyers because it's just true, even if we don't like to believe it). 😏
Critical thinking will not be replaced by A.I., and most of what good lawyers do relies on this hard-to-develop skill. Here are the 3 levels of modern knowledge work in a nutshell:
1. Information & Data processing (low-level, automatable, increasingly harder to monetize, except at scale)
2. Pattern Recognition & Synthesizing of Information (mid-level for now, and also getting harder to monetize, especially now that GPT is rapidly developing)
3. Complex problem solving based on critical thinking (high-level and will be for a long time; lucrative and largely immune from large scale disruption)