It is the night before the hearing. You pull up the heads of argument you finalised a few days ago, the ones you ran through the AI drafting tool because the timeline was tight. You read them now, and something is wrong.

The argument is technically correct. But it will not move the bench. It informs. It does not persuade.

You sit with it. The authorities are all there. The structure is clean. And yet the document has no feel for the bench. It does not know that this particular judge reads commercial contracts through a constitutional lens that has not yet fully crystallised in the reported case law. It does not know what is actually keeping your client awake tonight, which is not the legal question. It is the reputational one.

You drafted this. You are reading it the way the judge will, and that is the problem.

The thing the document is missing is not a fact. It is not a citation. It is everything you have spent years learning to read in a courtroom. And you sent it without noticing.

The Tool Is Not the Problem

AI tools are genuinely useful in legal practice. The speed with which a well-prompted model synthesises research, drafts a clause, or maps a regulatory framework against a set of facts is not negligible. Practitioners who use these tools well move faster through the volume work and free their attention for the work that actually requires them.

The risk is not that the tool replaces your judgment. The risk is quieter than that. It is that if you stop exercising a cognitive function, it begins to attenuate. Like a language you once spoke fluently but stopped using. The capacity does not vanish overnight. It thins. You notice it only when you need it.

Understanding what AI cannot do is not a philosophical exercise. For a practitioner who intends to lead at the highest level of the profession for the next twenty years, it is a performance question.


What the Machine Is Actually Doing

When a generative AI tool produces a legal document, it is doing something genuinely impressive and genuinely limited at the same time. It is identifying patterns in the data it was trained on and generating output that conforms to those patterns. It is extraordinarily fast at this. It does not get tired. It does not get distracted by the three other matters that are also urgent today.

What it cannot do is reason from genuine novelty. It cannot hold two competing values in tension and sit with the discomfort of not yet knowing which one should yield. It cannot draw on fifteen years of watching how a particular type of client behaves under pressure in the third month of a difficult matter. It cannot feel the difference between a client who says they are comfortable with the risk and a client who actually is.

Legal practice, at its most demanding, is not a pattern-matching exercise. It is a judgment exercise. The distinction matters because the brain functions required for each are different, and only one of them is at risk of atrophy in an AI-augmented practice.

Key Principle

Legal practice at its most demanding is a judgment exercise, not a pattern-matching exercise. AI excels at the latter. The prefrontal integrative functions that produce genuine legal judgment have no equivalent in any current model.

The Moment Before You Give the Advice

Think about the last time you gave advice that genuinely mattered. Not the routine opinion. The advice someone was waiting for in a way that had weight to it.

You probably did not know immediately what the right answer was. You held the question. You considered what the client had told you formally and what they had communicated informally, in the way they had paused before answering, in what they had not said. You thought about the applicable law and about where it was genuinely settled and where it was not. You thought about consequences that existed beyond the legal framework, in the commercial relationship, in the regulatory relationship, in the industry. You made a judgment call that integrated all of that.

No model trained on legal data produces that. It produces an answer shaped by the pattern of answers that exist in its training set. When the situation is genuinely novel, when the values genuinely conflict, when the right answer requires weighing things that cannot be reduced to a formula, the model generates confident-sounding output that may have no relationship to the specific situation in front of you.

The practitioner who has stopped doing the hard cognitive work themselves, who accepts AI output without interrogating it, who no longer sits with the difficult questions long enough to develop their own judgment about them, is the practitioner who will not notice when the model is wrong.

Authentic Leadership in the Legal Profession — Live Webinar

A two-hour session examining what genuine leadership in legal practice requires, and the specific cognitive disciplines that produce it. Available to individual practitioners. Recording included.


The Thing Clients Are Actually Paying For

There is a moment in a client relationship that every experienced practitioner recognises. The legal question has been answered. The opinion is technically complete. And the client is still in your office, or still on the call, because what they actually needed from this conversation has not yet happened.

They needed to know that someone who understands the law also understands their situation. Not the facts as formally presented, but the situation as it is actually being experienced by the person in the room. The exposure they have not named because they are not sure it is legally relevant. The outcome they need that is different from the outcome they have asked for.

Reading that requires presence. It requires the capacity to process relational information simultaneously with legal information and integrate both into advice the client can actually use. An AI tool can generate language that sounds attentive. It cannot sit across from a client and adjust in real time to what is being communicated beneath the surface of the formal instruction.

The premium on this capacity is not decreasing as AI becomes more capable. It is increasing. As the technical work becomes faster and cheaper, the human judgment, with its ethical weight and relational attunement, becomes the differentiator.

What Deliberate Practice Actually Requires

The cognitive functions that produce this kind of legal judgment are not maintained passively. They are maintained through specific, deliberate practice, and that practice looks different from what most practitioners currently do.

The first discipline is interrogation. Every time you accept AI-generated output, you are making a choice about whether to exercise your own reasoning or to defer it. Practitioners who maintain their judgment develop the habit of interrogating AI output not just for factual accuracy but for what it cannot know: what it has missed about this specific client, this specific bench, this specific commercial context. The act of identifying that gap is where legal judgment lives.

The second discipline is reflection. The most significant cognitive development in legal practice does not happen during matters. It happens after them. The practitioner who takes twenty minutes at the end of a significant piece of work to examine where the judgment calls were made, which of them were sound, and which were driven by pressure or assumption rather than genuine analysis, is building something that accumulates over a career. Post-matter reflection is the mechanism through which experience converts into judgment rather than simply into familiarity.

The third discipline is protection. Complex legal reasoning does not develop in fragmented conditions. Protecting even one substantial block of uninterrupted work each day is not a productivity strategy. It is a cognitive development strategy with consequences that compound over years.

The fourth discipline is discomfort tolerance. AI tools are appealing in part because they remove the discomfort of not yet knowing the answer. But it is precisely that discomfort, the experience of holding a difficult question without resolution, that builds the capacity for genuinely complex legal judgment. The practitioner who reaches for the tool before developing their own position is trading short-term relief for long-term attenuation of the function they most need.

None of this requires more time than most practitioners believe they have. It requires different decisions about how existing time is used.

The Leadership Connection

The cognitive capacities this article describes are not only relevant to how practitioners use AI. They are the foundation of effective legal leadership in every context. How an advocate leads a case team. How a corporate legal function builds trust with a board. How a senior practitioner sets the standard for every person working below them.

The lawyers who will define this profession in 2026 and beyond will not be those who use AI least. They will be those who use it well enough to protect the space where their own judgment lives, and who are deliberate enough about their cognitive development to ensure that judgment remains genuinely theirs.

That is not a technology question. It is a professional performance question. And it is one the legal profession is only beginning to take seriously.

What AI Cannot Do in Legal Practice – Frequently Asked Questions

What cognitive skills can AI not replicate in legal practice?

AI cannot replicate ethical intuition, genuine ambiguity tolerance, relational attunement, or the integrative judgment that draws on years of watching how a particular type of client behaves under pressure. These functions require the prefrontal cortex to integrate information from multiple neural systems simultaneously, a biological process that current AI models do not perform.

How does over-reliance on AI affect legal judgment over time?

Cognitive functions that are not exercised begin to attenuate. A practitioner who consistently accepts AI-generated output without interrogating it gradually loses the precision of the very skills that produce high-quality legal work. The process is slow and not visible until the capacity is needed and no longer fully available.

What is deliberate practice for legal professionals using AI tools?

Deliberate practice involves four disciplines: interrogating AI output for what it cannot know about the specific situation; reflecting after significant matters on where judgment calls were made; protecting uninterrupted blocks of time for complex reasoning; and tolerating the discomfort of holding a difficult question before reaching for a tool that resolves it faster than your own thinking would.

What is the difference between AI pattern-matching and legal judgment?

AI identifies patterns in training data and generates outputs that conform to those patterns. Legal judgment requires reasoning from genuine novelty, holding competing values in tension without a predefined hierarchy, and drawing on embodied professional experience that no training dataset contains. Only one of these is at risk of atrophy in an AI-augmented practice.

How can legal practitioners maintain their judgment when using AI drafting tools?

The key is treating AI output as a starting point for reasoning, not a conclusion. Practitioners who maintain their judgment develop the habit of asking what the model cannot know about this specific situation, and what their own experience tells them that the output does not reflect. This interrogation is not a quality check. It is a cognitive exercise that keeps judgment active.


Sonja Cilliers & Maryke Swarts

Co-founders  ·  Professional Mind Resilience Institute

Sonja Cilliers is an Advocate of the High Court of South Africa with 27 years of litigation experience. Maryke Swarts is a Master Transformation Coach, Neuro-Coach, and NLP Practitioner with an Honours degree in Psychology. Together they deliver neuroscience-based cognitive performance training for legal professionals across South Africa. PMRI holds a monthly column in De Rebus and a weekly column in LexisNexis Current Awareness+.
