Just as we start building competence profiles for lawyer licensure, here comes Generative AI to change what lawyers do. So let's combine these two trends and develop a critical new lawyer skill.
Hi Jordan, a little late to the discussion - I strongly agree that quality control is something our profession needs to improve on. My own background is in drafting and reviewing contracts, and in this area I think that lawyers very often use a contract's provenance as a proxy for its quality. E.g., people will say this precedent must be good because it came from [insert big firm], or this contract can be a template because we used it in [insert big transaction]. Unfortunately, the very article by Prof. Dan Linna that you cited (thanks for citing it, it was a good read!) pointed out that, unsurprisingly, even big firms make elementary mistakes. (I'm looking at p. 14, where an analysis of litigation briefs filed by California's 20 largest law firms shows that almost all contained elementary mistakes like misspelled case names or misquoted cases.) I think this "provenance as proxy" quality heuristic extends to hiring too - since it's hard to judge how good a lawyer is, firms tend to hire those who came from reputable law schools or law firms. (I'm leaving aside partners, who at least can be judged by their book of business.)
One commentator I've found very helpful on improving contract quality is Ken Adams (a US contract-drafting expert - unsure if you've heard of him). I think lawyers currently review contracts in a fairly broad way geared towards preventing foreseeable problems or achieving known goals, e.g. "Is there anything in here that doesn't favour my client?" or "How can I structure this transaction to minimise risk to my client?" While this is indeed important, this type of review is transaction-specific and so may take years for associates to learn; more importantly, it focuses only on avoiding known risks and achieving known goals. However, litigation often arises from a contract ambiguity that neither party had thought about, or that the parties might not even have realised was ambiguous. One example Adams gives: a notice "may be delivered by method A or method B". Can the notice be delivered by method C? He suggests a clearer wording would be "a notice may ONLY be delivered by..." https://www.adamsdrafting.com/an-english-case-involving-the-expectation-of-relevance/ He's published a book and keeps a blog on his recommendations for clear contract language. I've found both very helpful, since his advice applies to all kinds of contracts and is easy to learn.
Your area of expertise probably differs from Ken's, so I'm unsure how helpful his book and blog would be for you personally - but I'm putting this out there so others can potentially benefit from his materials.
Thanks for your great comment, Stella! I agree entirely with the problem of "assuming quality" in a product based on the provenance or brand of the producer (including when the product is a lawyer or new law graduate). I actually wrote about this at some length all the way back in 2009: https://www.law21.ca/2009/06/the-best-and-the-brightest/. Too much of our measurement and assessment in law is done by proxy, rather than through proven and tested standards. That will have to change.
Great article - I recently came across your writing and have been enjoying it. The passage below provoked a few thoughts that I'd be interested to hear your perspective on:
“Maybe we don’t educate, train, and evaluate law licensure candidates on their ability to personally deploy these skills or carry out these activities, with or without any given technology. Maybe the core competence that we educate, train, and evaluate in lawyers is the ability to assess the quality and effectiveness of legal products and services and determine whether they’re fit for purpose — regardless of whether they were generated by machines, or people, or both.
Under this approach, you would encourage law students to generate essays, papers, and memos with the use of Generative AI (after first showing them how to instruct properly). But you wouldn’t grade their papers — you’d grade them on their ability to explain why the work product is or isn’t effective, valid, and fit for purpose. That would be a better measure of analytical, evaluative, and critical thinking skills.”
I’d argue that the ability to assess the quality of legal products and services is already the core competence of a good lawyer in the current system. As a lawyer gets more senior, they invariably spend less time producing work product and more time guiding, reviewing, and assessing the work product of their juniors. The expertise that enables this is why senior lawyers are able to demand high fees – they deliver value in a way few others can. The conventional wisdom is that this expertise is won through years of learning-by-doing and many, many corrections and lessons from more senior lawyers. While there are certainly many problems with the way law schools teach, I think that it would be a disservice to teach students only to review and not to do. This would be like teaching prospective drivers to spot mistakes in videos of old races, then sending them off to start their careers at the Nürburgring. Instead, I think we should focus on ensuring that students develop a clear understanding of the types of work that humans and AI respectively excel at. Then, when they enter practice, they can fully leverage AI to unlock the time needed to produce value-add work product that actually uses their intelligence and legal knowledge (unlike the typical junior lawyer assignment today). Hopefully, they’ll even get to sleep a bit.
I'm wondering - how can we teach students practical skills when they don’t know what area of law or type of firm they’ll end up working in? Perhaps GenAI can help with more personalized learning exercises & assessments...
Josh, thanks very much for your positive feedback and great questions!
I agree that "the ability to assess the quality of legal products and services is already the core competence of a good lawyer in the current system." The difficulty, from my perspective, is that this ability is normally developed *after* the call to the bar -- in most cases, several years after. I think that's too late. If this is a core skill for being a lawyer, then it should be developed (at least to a point of minimum competence) *before* a law license is issued.
A lawyer should be able to detect unacceptable work (including, if not especially, their own) as soon as they have the right to serve clients. Partly, this is to give the new lawyer confidence that they can tell par work from sub-par work and that they know how to stay on the right side of that divide (self-confidence in their own abilities is an underrated feature of effective lawyers).
But mostly, to my mind, this is to serve clients and protect the public interest. A client has the right to expect competent service from anyone with a law license. That competence extends not just to the ability to perform a service, but to know whether the service is *good enough.*
The conventional wisdom, as you say, is that a lawyer develops this expertise over the course of many years of "learning by doing." The problem is that the "doing" comes at the expense of clients, who are on the receiving end of all the new lawyer's actions that require "corrections and lessons" from senior practitioners (or from other, less friendly sources).
It bothers me greatly that as a profession, we seem content to allow lawyers to "learn by doing" (which means doing things wrong many times until you finally do them right) while working on live problems for real clients. I don't expect new lawyers to enter the profession with the wisdom of Solomon and the expertise of Atticus Finch; but I do think that we should relocate more of the "learning by doing" process to the pre-call stage of lawyer formation, or at the very least, to a "supervised-practice trainee" stage similar to (but more demanding than) articling here in Canada.
I also agree that lawyers should learn how to do legal things, not just how to know whether a thing has been done inadequately. The difficulty is that "how to do legal things" is changing faster than our education and admission systems can react. This isn't just about Generative AI -- it encompasses all manner of new technologies filtering into the profession, as well as new ways of working (including remote and hybrid work and collaborating with other professionals). In the result, we're looking at several annual cohorts of new lawyers who will learn how to do things one way, then be forced to re-learn how to do them in myriad different ways once they enter practice.
I think that's an unfortunate, but also unavoidable, outcome. We live in times of dizzying transition, and law is not the only industry undergoing this kind of transformation. But that's why I think "quality control" is an increasingly necessary skill -- not only to allow lawyers to spot problems with legal output, but more importantly, to give them the wherewithal to start building standardized and quality-controlled *systems* that can produce legal work more accurately and efficiently. (I wrote about this in more detail back in 2019: https://www.law21.ca/2019/09/the-rise-of-the-lawyer/).
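To make that a little more concrete: here's a purely hypothetical toy sketch in Python (my own illustration, not any real system, and far simpler than what actual contract QA would require) of the kind of mechanical check such a system might include, using the permissive notice wording from Stella's Ken Adams example above:

```python
import re

# Hypothetical toy rule, inspired by the Ken Adams example earlier in
# this thread: flag "may be delivered/given/sent" clauses that lack an
# exclusivity marker ("only") later in the same sentence.
AMBIGUOUS_NOTICE = re.compile(
    r"\bmay be (?:delivered|given|sent)\b(?![^.;]*\bonly\b)",
    re.IGNORECASE,
)

def flag_ambiguous_clauses(contract_text: str) -> list[str]:
    """Return sentences that use permissive notice wording without 'only'."""
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    return [s for s in sentences if AMBIGUOUS_NOTICE.search(s)]

sample = (
    "A notice may be delivered by courier or by email. "
    "A notice may only be delivered by courier or by email."
)
for clause in flag_ambiguous_clauses(sample):
    print("Check exclusivity:", clause)  # flags only the first sentence
```

Real contract quality assurance would obviously need far more than pattern-matching, but the point is that once a drafting standard is made explicit, at least part of the quality check can be systematized rather than left to each reviewer's judgment.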
Generative AI can be, and ought to be, a transformational force in the law in one particular sense: It should force us to finally abandon the "lawyer expert" model of one-to-one legal service activity and start to rely more on systemic solutions to legal problems. But those systems won't build themselves and they won't monitor themselves. If lawyers don't run the systems, the systems will run us. Knowing how to build and manage the output of a viable, reliable, and effective legal solutions system will, before too long, be a core lawyer competence. That's how it looks to me right now, anyway.
Great points. I actually like the idea of law schools staying more theoretical, given your point about graduates having to re-learn all the practical skills anyway once they're working, but I definitely agree that it would be great to see systems-based thinking added to the curriculum. Law schools are good at teaching structured, reasoned thinking (the old cliché: teaching students to "think like a lawyer"), so I don't see why they couldn't add a few other mental models as well. Imagine how different things could be if every new graduate were familiar with frameworks like Lean Six Sigma or object-oriented programming. As long as they're comfortable with using tech beyond scrolling Instagram or writing an essay (which is, in my experience, the extent of the average digital native's computer literacy), I can't see the status quo being tolerated for long.