The newest lawyer competence: Quality control
Just as we start building competence profiles for lawyer licensure, here comes Generative AI to change what lawyers do. So let's combine these two trends and develop a critical new lawyer skill.
If you’ve been reading this newsletter for a while, you’ll know that lawyer formation — the education, training, licensing, and ongoing development of lawyers — has become my core professional interest.
I’ve written two detailed reports on lawyer competence and licensing for Canadian legal regulators (in Alberta and British Columbia) in the last couple of years, as well as numerous posts at my Law21 blog. I’m consulting to regulators and law firms on professional development matters, and I’m planning to write an entire book on the subject next year. I am, as they say, all in on this subject.
So you can imagine how stoked I was to learn that the legal regulators of Canada’s four western provinces have formed a Western Canada Competency Profile Task Force, with the specific goal of developing a joint Competence Profile that will identify the competencies required upon entry to legal practice.
The WCCP Task Force created a draft Competence Profile earlier this year, consulted key justice system stakeholders in July and August to collect feedback, and modified its first draft accordingly. Then, late last month, the Task Force launched surveys in each province to gauge the legal profession’s views on the subject. (You can take part in this survey if you’re currently or recently in active practice or in a legal role in Alberta, British Columbia, Manitoba or Saskatchewan — but hurry, because the survey closes tomorrow.)
Developing such a profile, and building a lawyer licensing system around it, was the lead recommendation of my report to the Law Society of B.C. So I was proud to see, in the law society’s announcement, reference to my Licensing and Competence Report as the basis for this project.
As you can probably tell, I think this is a big deal — and not just for Canada. I wrote last month about the sudden surge in lawyer licensure reform in the United States, primarily through new pathways into practice that don’t involve a bar exam. I’m now hearing more talk from American licensure reformers about the need for lawyer competence profiles to support these new pathways. England & Wales broke the new ground here years ago, but it seems like North America has suddenly tuned in to this conversation.
This is all great news. But it comes with a strong dose of irony. Finally, after too many years of “credentials-based licensing,” we’re committing ourselves to build valid, consistent, and defensible lawyer licensure programs, based on carefully established frameworks of core competencies — at the precise moment when those core competencies, which had barely changed for decades, could be poised for transformation.
I submitted my Licensing and Competence Report to the Law Society of B.C. in May 2022. The board of directors approved it in September 2022. Just two months later, OpenAI released ChatGPT, built on GPT-3.5. If I’d written my report after that release, and especially after GPT-4 came out the following March, I might well have reached some different conclusions.
Here’s what I mean. In my report, I contended that lawyer competence, both at the point of licensure and throughout a legal career, can be broken down into four broad categories:
1. Knowledge of the law
2. Understanding of professional responsibilities
3. The skills of a lawyer
4. The skills of a professional
For the moment, we’ll focus on category #3. These are the skills that enable a lawyer to perform tasks unique to legal practice (as opposed to #4, “the skills of a professional,” which covers the organization, management, and communication skills any professional should possess). In my report’s “Lawyer Competence Starter Kit,” this is what I listed under “skills of a lawyer”:
1. Gather relevant facts through interviews and research
2. Carry out legal research
3. Conduct due diligence
4. Draft essential legal documents
5. Solve problems using legal knowledge and analysis
6. Help negotiate solutions and resolve disputes
7. Advocate for a client’s position
8. Provide legal advice to clients
9. Use law practice technology
10. Fulfill the basic business and professional requirements of a private law practice
Outside of #7 and #10, and maybe #1, each of these activities will in future be carried out with some use of, and potentially entirely through, artificial intelligence. I’m reasonably confident (as much as anyone can be about such a mysterious and dynamic technology) that LLMs will become ubiquitous in law practice by the end of this decade. Systems built on or infused with Generative AI will become the primary means by which many legal tasks that now occupy much of lawyers’ billable time are performed.
What will be involved in drafting a legal document in 2026? How will lawyers conduct negotiations in 2027? How will they go about assembling a legal advice memo in 2028? For these and countless similar questions, the best answer we can give today is: “We’re not sure, but AI will probably be involved in some way.” Beyond that, it’s entirely guesswork. But somehow, we need to educate, train, and license lawyers today to perform those tasks when their time comes.
I can’t see a way to do that in practical terms. We simply don’t know how Generative AI is going to be used for legal work even a few years from now. Any standards we began developing today probably couldn’t be implemented for another two years at the earliest, and they would be obsolete long before then. The pace of change in this area is too fast and its direction too unpredictable.
And even if we did know with certainty what tasks lawyers will be performing and what role AI will play in future, our lawyer training and evaluation institutions are incapable of responding in time. Law schools are utterly allergic to Generative AI and are years away from incorporating it into their curricula, while the NCBE’s “Next Generation” bar exam won’t hit candidates’ desks till 2026, and most other bar admission programs are much less advanced than that. We’re kind of stuck.
Unless … maybe we look at this whole thing differently.
Maybe we don’t educate, train, and evaluate law licensure candidates on their ability to personally deploy these skills or carry out these activities, with or without any given technology. Maybe the core competence that we educate, train, and evaluate in lawyers is the ability to assess the quality and effectiveness of legal products and services and determine whether they’re fit for purpose — regardless of whether they were generated by machines, or people, or both.
Under this approach, you would encourage law students to generate essays, papers, and memos with the use of Generative AI (after first showing them how to instruct properly). But you wouldn’t grade their papers — you’d grade them on their ability to explain why the work product is or isn’t effective, valid, and fit for purpose. That would be a better measure of analytical, evaluative, and critical thinking skills.
You could do the same with the licensure process. Give a candidate the facts of a client matter, and assess them on (a) how effectively and efficiently they can instruct AI to produce an analytical and advisory memo, and (b) how accurately they can then critique and improve the memo the AI produced. This is likely to be how lawyers generate work product for clients in future, so why not judge aspiring lawyers on their performance in this regard?
In this way, we’d be developing in lawyers the ability to determine what good legal work products and services look like, rather than the ability to create that work from scratch. The first capacity feels a lot more like what lawyers will be doing in five or ten years’ time. We can and still should develop and assess other critical knowledge and capacities not affected by AI — advocacy, organization, ethical awareness, etc.
But through this new approach to “the skills of a lawyer,” we could raise the first generation of lawyers familiar with and proficient in quality control.
This is something we’re long overdue to address as a profession. “Law has not undergone a quality movement,” Prof. Dan Linna wrote in 2020. “The legal industry has not fostered a culture of standard work, error detection, peer review, performance measurement, and continuous improvement. Likewise, law does not demand evidence-based, data-driven practice. The legal industry does not rigorously assess the efficacy, quality, and value of legal services.”
The sudden rise of Generative AI, which has a lot of quality-control issues at the moment and probably will for some time yet, makes the need to develop quality assessment skills in lawyers more urgent. But this isn’t an issue for the distant future: Quality control and assessment are almost entirely absent in law firms today.
Look around your own law firm, or one nearby. There are relatively few standardized quality checks on the work of associates, and none at all on the work of partners. Imagine telling a senior lawyer that their work will be evaluated by a fellow partner before it goes out to the client, or that it must be carried out systematically, according to quality standards and procedural protocols to which all work product must adhere. Good luck with that conversation.
I don’t want to oversimplify a complex topic here — I wrote a few years back about the challenge of understanding what “quality” means in a legal practice context. We have a lot to learn and a lot of catching up to do as a profession.
But at the very least, we need to start thinking about “lawyer skills” not in terms of the generation of individual work product, but in terms of assessing whether the work — no matter how it was produced — is effective, valid, and fit to give the client. That could provide us with a roadmap by which to navigate the increasingly unfamiliar road to competence.
I’m very happy that we’ve entered a new era for defining and evaluating the competence of lawyers. I was already pretty excited that our profession has entered the age of Generative AI. So let’s combine these two developments and bring the legal profession to a level of sophistication and performance it’s never seen before.
Hi Jordan, a little late to the discussion - I strongly agree that quality control is something our profession needs to improve on. My own background is in drafting and reviewing contracts, and in this area I think that lawyers very often use a contract's provenance as a proxy for its quality. E.g., people will say this precedent must be good because it came from [insert big firm], or this contract can be a template because we used it in [insert big transaction]. Unfortunately the very article by Prof. Dan Linna that you cited (thanks for citing it, it was a good read!) pointed out that, unsurprisingly, even big firms make elementary mistakes. (I'm looking at p. 14, where an analysis of litigation briefs filed by California's 20 largest law firms shows that almost all had elementary mistakes like misspelling case names or misquoting cases.) I think this "provenance as proxy" quality heuristic extends to hiring too - since it's hard to judge how good a lawyer is, firms tend to hire those who came from reputable law schools or law firms. (I'm leaving aside partners, who at least can be judged by their book of business.)
One commentator I've found very helpful in the area of improving contract quality is Ken Adams (US contract drafting expert, unsure if you've heard of him). I think lawyers currently review contracts in a fairly broad way geared towards preventing foreseeable problems or achieving known goals, i.e. "Is there anything in here that doesn't favour my client?" "How can I structure this transaction to minimise risk to my client?" While this is indeed important, this type of review is transaction-specific and so may take years for associates to learn, and more importantly focuses only on avoiding known risks/achieving known goals. However litigation often arises from a contract ambiguity that neither party had thought about, or which parties might not even have realised was ambiguous. One example Adams gives: a notice "may be delivered by method A or method B". Can the notice be delivered by method C? He suggests a clearer way to word the clause would be "a notice may ONLY be delivered by..." https://www.adamsdrafting.com/an-english-case-involving-the-expectation-of-relevance/ He's published a book and keeps a blog on his recommendations for clear contract language. I've found these very helpful, since his advice is universally applicable to all kinds of contracts and is easy to learn.
Your area of expertise probably differs from Ken's, so unsure how helpful his book/blog are for you personally. Also putting this out there so others can potentially benefit from his materials.
Great article - I recently came across your writing and have been enjoying it. The below provoked a few thoughts that I’d be interested to hear your perspective on:
“Maybe we don’t educate, train, and evaluate law licensure candidates on their ability to personally deploy these skills or carry out these activities, with or without any given technology. Maybe the core competence that we educate, train, and evaluate in lawyers is the ability to assess the quality and effectiveness of legal products and services and determine whether they’re fit for purpose — regardless of whether they were generated by machines, or people, or both.
Under this approach, you would encourage law students to generate essays, papers, and memos with the use of Generative AI (after first showing them how to instruct properly). But you wouldn’t grade their papers — you’d grade them on their ability to explain why the work product is or isn’t effective, valid, and fit for purpose. That would be a better measure of analytical, evaluative, and critical thinking skills.”
I’d argue that the ability to assess the quality of legal products and services is already the core competence of a good lawyer in the current system. As a lawyer gets more senior, they invariably spend less time producing work product and more time guiding, reviewing, and assessing the work product of their juniors. The expertise that enables this is why senior lawyers are able to demand high fees – they deliver value in a way few others can. The conventional wisdom is that this expertise is won through years of learning-by-doing and many, many corrections and lessons from more senior lawyers. While there are certainly many problems with the way law schools teach, I think that it would be a disservice to teach students only to review and not to do. This would be like teaching prospective drivers to spot mistakes in videos of old races, then sending them off to start their careers at the Nürburgring. Instead, I think we should focus on ensuring that students develop a clear understanding of the types of work that humans and AI respectively excel at. Then, when they enter practice, they can fully leverage AI to unlock the time needed to produce value-add work product that actually uses their intelligence and legal knowledge (unlike the typical junior lawyer assignment today). Hopefully, they’ll even get to sleep a bit.
I'm wondering - how can we teach students practical skills when they don’t know what area of law or type of firm they’ll end up working in? Perhaps GenAI can help with more personalized learning exercises & assessments...