The new legal intelligence
We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.
Much of the legal tech world is still talking about Clio CEO Jack Newton’s keynote at last week’s ClioCon, where he announced two major new features: the “Intelligent Legal Work Platform,” which combines legal research, drafting and workflow into a single legal workspace; and “Clio for Enterprise,” a suite of legal work offerings aimed at BigLaw.
Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June. On the day the vLex news broke, I posted the following note on LinkedIn:
This deal is going to change the legal sector, for a whole bunch of reasons, but primarily this: A world-class stockpile of primary-source law and a powerful legal AI engine (Vincent) are now in the hands of a company that powers law firms, not a legal information or publishing company.
Lexis and Westlaw look at legal AI primarily as knowledge and technology, because this is what those companies know. Clio is going to look at legal AI as a practical means to generate legal outcomes for clients, because that’s what they know. It’s a completely different take on the use of Generative AI in law, and it opens up previously unexplored possibilities.
As I anticipated, Clio is integrating AI into both the “business of law” (its traditional strength) and the “practice of law,” fusing these previously separate parts of the legal enterprise into, in Jack Newton’s words, “a single, context-aware platform where one AI understands how the pieces fit together.” Clio’s vision, it seems to me, is to make AI the unifying infrastructure of all aspects of the law firm.
The ClioCon announcements capped off a remarkable week of news in the legal AI space:
The newest Vals Legal AI Report, benchmarking several Gen AI tools (both general and law-specific) against human lawyers, found these tools (including ChatGPT) outperformed lawyers on well-scoped, single-jurisdiction legal research questions in terms of accuracy, authoritativeness, and appropriateness of results.
Based on a number of separate reports, it’s clear that corporate clients are now using AI in their own legal work and are pushing their outside counsel to adopt these tools, transparently disclose how they’ve employed AI, and show how they’ve reduced their bills as a result.
Ordinary people are using ChatGPT for legal matters, including one who overturned an eviction notice and avoided $73,000 in penalties and overdue rent, and another who renegotiated a debt and saved $2,000. There are certainly countless more unreported examples (although Gen AI is no access panacea).
What all those accounts suggest, and what Clio’s announcements confirmed, is that Large Language Models are now capable of performing legal tasks across a wide range of activities, sometimes with little or no human oversight, and in certain conditions (well-scoped, lower-stakes tasks) even better than human lawyers can manage.
Obviously this is an extraordinary development; but I think we haven’t fully absorbed just how extraordinary. We’re still talking about hallucinated cases and the number of Rs in “strawberry” and whether AI will replace lawyers or kill the billable hour. What we should be talking about is this:
A new source of legal intelligence has entered the legal sector.
I want to avoid any definitional confusion here. In the course of his ClioCon remarks, Jack referred to “legal intelligence” as the value-added difference between Clio’s new Work Platform, which is grounded in reliable legal sources like caselaw and statutes, and the more hallucination-prone general-purpose LLMs like ChatGPT.
But by coincidence, that phrase was already in the working title of this article before the Clio news broke, and I had a different definition in mind: The capacity to learn, understand, and reason with legal rules and principles, and to use that ability to assess, analyze, and solve legal problems.
“Legal intelligence,” by this definition, is a capability — a power to do certain things and achieve certain outcomes. If you’re a lawyer, you acquired the “legal intelligence” capability in law school, using your human intellect. You probably refer to it as your ability to “think like a lawyer,” and that’s accurate enough for present purposes (although see my reflections on “legal intuition” earlier this month).
Now, “thinking” is an act most people consider to be exclusively human (or at the very least, exclusively biological). Artificial intelligence programs don’t “think” — they generate output in response to external prompts using statistical modelling and probabilistic prediction based on their data and training. So it would not be correct to say that LLMs can “think like a lawyer.”
But LLMs do possess the capability of “legal intelligence” — in functional terms, they can learn, understand, and reason with legal rules and principles, and they can use that ability to assess, analyze, and solve legal problems. This is particularly true of law-specific programs like Vincent, CoCounsel, Harvey and others; but as the reports above confirm, it can also be said of general-purpose LLMs like ChatGPT, Claude, and Gemini in many cases. These programs can generate legal output strikingly similar to the content lawyers produce using human legal intelligence.
Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.
Think of it this way: Picture your town or city from above, as if you’re looking down at a Google Maps display. Drop a red pushpin on the map for each lawyer below you, each human source of legal intelligence. Lots of little red pins would pop up, or maybe only a few, if (like many people) you’re in a region that’s short on lawyers.
Now drop a blue pin for every desktop, tablet, and smartphone that can run an LLM, either a general-purpose model like ChatGPT or a legally grounded model like Vincent. The entire map would flood blue with countless sources of artificial legal intelligence, vastly outnumbering the human sources in red.
Even if you were to count only the law-specific AIs, which conservatively are in regular use by a hundred thousand lawyers every month, the blue pins would at least rival the red — and they’re growing fast. And each blue pin on the map represents a standalone source of artificial legal intelligence, one that can mimic the results of a human lawyer’s legal reasoning much faster, much less expensively, 24/7/365.
That’s what’s happening to the legal market right now: the supply of legal intelligence is exploding. A valuable capability that used to be scarce is on its way to becoming ubiquitous. It’s not just that we’ve developed artificial legal intelligence, it’s that this capacity is scalable, portable, and accessible — three things that most human lawyers are not.
This is why I keep saying that lawyers’ best hope for future relevance is to focus on their human attributes — their capacity to form trusted relationships, to offer legally informed but personally inflected counsel, judgment, and wisdom. Legal intelligence is not going to be exclusive to humans. But intuition, integrity, and the ability to form trusted relationships will be. These human attributes are uniquely our own, and they’re impervious to AI mimicry. That’s where we need to start.
There’s one more consideration here, however, one that I raised in the other part of my LinkedIn comment in June:
Unlike Lexis and Westlaw, Clio has no built-in motivation to keep primary source case law behind expensive lock and key. Legal information is Lexis and Westlaw’s moat; it’s not Clio’s. That should be good for general accessibility to the law and very much not good for Lexis and Westlaw.
Now, I have no idea what any of these companies have in mind for their next trick. But it’s notable that pretty much every advanced legal AI offering in the market today is geared and sold to the legal profession, as tools and ecosystems for more efficient legal workflow and more effective legal businesses.
But suppose that some entity, in possession of deep and authoritative legal data, were to build an LLM that made lawyer-quality legal intelligence available to everyone. Suppose that anybody with a smartphone could access the ability to “think like a lawyer” without having to consult one. Suppose that map, flooded with blue ChatGPT and Claude and Gemini pins, became a map covered with blue Vincents and Harveys and CoCounsels instead.
Scalable, portable, accessible, authoritative legal intelligence, in the palm of your hand, 24/7/365. Bundled with your smartphone plan, or included with Amazon Prime. What does that world look like to you?