A dangerous mind
Generative AI is a tireless genius with no boundaries. Use it carelessly, and it can usurp your voice, overwrite your ideas, and steal your originality. Make sure you safeguard your capacity to think.
Last week, I read an article from a paid analysis service that I’m certain was mostly written by AI. It wasn’t just the stereotypical giveaways — the single-sentence paragraphs, the run-on lists of examples, the em-dashes without a space on either side. Nor was it the many egregious instances of not-this-but-that sentence construction (“This isn’t just X anymore. Now it’s Y—and it will change everything.”).
For me, the real sign of AI-generated text is the relentlessly pragmatic narrative style. There’s no “slack” in AI content — no personal touch, no parenthetical aside, no word that doesn’t play a functional role. AI content is an “impersonation,” in the literal sense of the term: It’s an alien intelligence’s uncanny replication of how human thought might be communicated in written form.
As tiresome as it is to encounter this sort of content, though, I’m also growing to accept it. I spent nearly 14 years as an editor of other people’s written work, so I’m accustomed to reading styles that aren’t my own and that I wouldn’t want to make my own. I just don’t like it when someone uses AI to create content and runs it under their own byline, especially if it’s content I pay for.
But what I like even less — and what I think is becoming more frequent and more troubling — is when a person, trying to create their own content in their own style, unintentionally mimics AI in the process.
If you’re a novice writer and you use AI a lot, then you will unconsciously start to copy its mannerisms — just as an associate at a law firm will adopt the written tone of the partner who marks up their drafts. This inadvertent imitation will become more prevalent as time goes on, especially as the education system demands less writing (and reading) from students. In the worst case, machines and people will end up iteratively copying each other’s styles, slowly drifting together until we arrive at a weird miasma of bland content generated without original human thought.
I’m fortunate that I enjoy writing for its own sake, and that Generative AI came along long after I’d developed my own written voice. I’ve got probably half a million words out in public, against which anything I write today can be cross-checked for consistency of style. And I like my own writing better than what AI can produce anyway. So having AI write my articles is not an issue.
But there’s a different and growing problem with AI-assisted content that applies particularly to people in my line of work. And I’m not entirely sure how to address it.
Whenever I publish something, I’m making an implicit promise: These are my words, generated by my thoughts. To the degree I have value in the legal sector, it’s on the basis of my observations, analyses, and recommendations, as well as the skill with which I communicate them. (That applies to my spoken presentations as much as my written content.) If those observations and recommendations, and that style of writing, aren’t really me, then much of that value dissipates.
So I don’t use AI to write my articles. I do use it to help make my articles better — to critique my ideas, look for blind spots and breakdowns in my arguments, and stress-test my drafts. AI is really good at that. It’s as formidable an editor as I’ve worked with, particularly when spotting my tendency (as you’ll recognize if you’ve been reading me long enough) to generalize a trend beyond what the evidence can support.
I don’t see anything wrong with using AI this way. On those occasions when I’ve engaged human editors, I’ve often accepted their suggestions for restructuring a section, or their constructive criticism of my arguments. That’s what editors are there for: to offer recommendations that would make the final product better.
My particular problem, though, is that AI is too good at this. It gives you much more than you asked for. Request an assessment of a draft, and you’ll also get options for making the argument stronger. Give it a concept you’re working on, and it will come back with three creative variations or a nice turn of phrase you hadn’t thought of. And whereas a human editor’s suggestions create more work, the AI happily offers to do all the work for you, instantly.
I don’t need any of that: I can write my own thoughts and come up with my own analyses, thanks very much. I’m happy for AI to build on my ideas or challenge my arguments, but it doesn’t get to replace them with its own. I’m not interested in being upsold by a machine intelligence vastly more powerful than mine, tossing out what it considers to be improved versions of what I just said.
But here’s the killer: Sometimes, what the AI offers up is an improvement. The AI did see something I didn’t, or saw it in a different way. It built something strong on top of what I drafted or came up with a helpful perspective, just as a good human editor would. Sometimes it goes even further and gives a creative spin on my idea that’s just as good, or even better. It’s not trying to outdo me, obviously; but when you program a super-intelligence to be exceedingly helpful, this is what’s going to happen.
And it always leaves me irritated, because I don’t want what the AI comes up with. Even though the AI’s response is based on my work, that response still isn’t mine. So I leave the suggestion on the screen. But in a way, it’s too late, because I’ve already read it. And now it’s going to sit in the back of my brain and bug me, as I write an article that I know could be more effective if I included what the AI generated. And I will be strongly tempted to use it.
Now, you might say, “Big deal. You take suggestions from a human editor without hesitation; what’s the difference if you take them from an AI editor instead?” That resonates, and I do find it persuasive. I could just give AI the standard editorial attribution anytime I use it, e.g.: “This article was produced with assistance from ChatGPT-5.5, which provided additional insights based on my draft work.”
But I hesitate here as well. Partly, I suppose, it’s because I worry that citing a contribution from such a powerful tool risks undermining my own credibility as an original thinker and analyst. If your co-author is Einstein, who’s going to give you credit for how smart you are? And partly, it’s because there’s some serious hostility towards AI out there, and I don’t enjoy the prospect of being a test case for its application. I don’t want an online target painted on my back.
An easier solution, of course, would be not to use Gen AI to review my work, or to use it within such tight parameters that it can’t make any substantive suggestions. But that would bug me, too — not least because I’m confident many other people in this field are routinely using Gen AI to augment or amplify or upgrade their own work. I don’t want to compete in the marketplace of ideas with one arm tied behind my back.
Basically, there’s an incredibly intelligent expert on everything sitting quietly in my office. If I ask this expert for help, I’ll get an avalanche of powerful insight that could make my work better. But if I give it credit, I could undermine my credibility and alienate all the people who believe the expert shouldn’t exist in the first place. If I don’t give it credit, I’m being dishonest with my readers. And if I don’t use it at all, I risk falling behind those who do.
That’s my dilemma, one that’s essentially rooted in authenticity. But soon enough, lawyers will have a similar dilemma (if they don’t already), one that’s rooted in capability. And both these dilemmas arise from the same temptation: letting the machine do the hard cognitive work.
Most of the Gen AI talk in law right now centers around efficiency and productivity gains. That’s important, because AI will catalyze an overdue reconfiguration of legal business models. But it’s a one-time, first-level impact. The long-term, second-level effect is that AI will be every lawyer’s personal genius-on-call: a brilliant analyst, a strategic planner, a sparring partner. Lawyers who use Gen AI to help them advise, advocate, and accompany their clients will be more successful than those who don’t.
I’m certain that many lawyers are using AI in exactly this fashion right now. The competitive imperative — either to stay ahead of other lawyers with AI or not to fall behind them — must be overpowering.
But unlike me, lawyers won’t be dogged by anxiety about authenticity of voice or integrity of ideas. That’s because their clients really don’t care how the lawyer comes up with a winning idea — only that the idea appears and the solution works. Lawyers who use AI to help them solve client problems won’t have to worry that it’ll erode their credibility. They’ll have a different challenge.
The problem with calling on a genius to help you work is precisely that it’s a genius. Its cognitive power will outshine yours, and its energy and enthusiasm will never flag. So it will become easy to downsize your own contribution. Instead of saying, “Here’s my trial strategy, which I spent a week developing; pick out the defects,” you’ll find yourself saying: “Here’s the fact situation; draft ten trial strategies.” Because thinking hard is hard work, and you have a hundred other things that need doing, and the AI is just so fast and so good at this kind of thing.
So you will spend less time reasoning, analyzing, wondering, and revising your ideas — especially because AI will make everyone else less patient and more demanding: “Don’t you have that done by now? Aren’t you using the AI?” But the less often you think long and hard, the less adept you’ll be at it, and the more your thinking skills will atrophy.
This will be enough of a problem for veteran practitioners. But for new lawyers and law students, it will be a disaster. They need to think, frequently and with effort, to develop their judgment, and legal judgment will be the one indispensable trait for every post-AI lawyer. If you don’t write, you won’t learn how to think; if you don’t think, you won’t learn how to make the judgment calls your clients need from you.
Just like analysts and consultants, lawyers will have to develop protocols about when and how to use AI. I’m already learning not to give AI an idea that’s not fully formed or a draft that’s not fully realized, because it will auto-complete and auto-improve my work in seconds. I need to avoid that, not only to protect my reputation and my integrity, but also because I don’t want to forget how to come up with ideas. And as a lawyer, neither do you.
Someday soon, pretty much everyone will be using AI to help them do cognitive work. But that’s the problem: If you rely on AI to do 90% of your thinking, your final work product will look 90% like what everyone else is producing. Hundreds of millions of people use Gen AI every week, and the genius-level version costs US$20 a month. Work produced by AI alone, or even mostly by AI, has no market differentiation.
Don’t let the genius do the hard work for you. The more incisive and unique your own thinking — the more you battle and struggle and eventually succeed in getting your ideas and insights out — the more you can benefit from the AI’s complementary improvements. The great irony of Gen AI is that it actually makes your own cognitive processes your most valuable asset.
So safeguard your mind. Defend your right to think as only you can. And if you don’t want AI to replace you, then don’t send it a written invitation.



Agreed.
But it’s absolutely the bomb for drafting things that don’t need your voice: a checklist, a questionnaire, a first draft of an SOW. Hell, even a statement of claim.