There is a junior lawyer at my firm—not a person, but a composite—whose workflow I have started watching with unease.
She is bright. She graduated near the top of her class. She passed the bar on the first attempt. By every measure that the firm uses to evaluate hiring, she is exactly the kind of associate we want.
Her workflow looks like this. A partner sends her a research task. She opens a legal AI tool—one of the wrappers; the specific brand isn’t important. She types the question into the tool. She gets back a structured response, with citations, in about twenty seconds. She copies the response into a Word document, adjusts the formatting to match the firm’s house style, and sends it back to the partner with a brief cover note.
The partner reads it, makes minor edits, and submits it to the client.
Everyone is happy. The work is done. The hours are billed.
I have started to wonder, watching this pattern repeat across many associates over many months, whether the people inside this workflow have any idea what is happening to them. Because what looks like efficient lawyering, from the inside, looks to me—from the outside—like the slow construction of a generation of legal workers who will not be able to do legal work without their tools.
The pattern that produces “AI’s servant”
I want to describe the pattern carefully, because it is subtle and the people inside it are not stupid.
Stage one: the associate uses AI to accelerate research. They still read the underlying cases. They use the AI’s summary to orient themselves, then verify against the source material. The AI is a productivity tool, not a substitute for analysis.
Stage two: the associate, under deadline pressure, starts trusting the AI’s summary more and verifying less. The first time they catch a small error, they’re cautious for a week. Then the deadlines come back. The verification slips.
Stage three: the associate has stopped reading the cases entirely. They read the AI’s summary, paste relevant excerpts into the memo, format it, and submit. They are no longer reading legal authority. They are operating a translation layer between AI output and partner expectations.
Stage four: the associate now experiences active discomfort when asked to do legal work without AI assistance. They have not built the underlying capability that AI was supposed to be augmenting. The AI has not augmented their lawyering. It has replaced it, while leaving them in the seat.
I am watching this pattern play out, in slow motion, across multiple junior associates simultaneously. None of them realize it is happening. From their perspective, they are doing their jobs efficiently. From mine, they are turning into operators of a system they do not understand and cannot defend.
What capability actually looks like, and how it gets lost
The thing that distinguishes a real lawyer from an AI’s servant is hard to describe but easy to recognize when it is missing.
A real lawyer can sit in a conference room with a client and, when asked an unexpected question, produce a useful answer. Not a perfect answer. Not the answer a research database would give. A useful answer—one that draws on a thousand small pieces of accumulated judgment about how courts actually behave, how this client actually operates, how the relevant industry actually functions. The answer is calibrated to the moment. It cannot be Googled. It cannot be prompted.
This capability comes from a specific kind of practice. It comes from reading thousands of cases over years and noticing patterns. It comes from drafting thousands of clauses and watching which get redlined and why. It comes from sitting through dozens of negotiations and observing which arguments land and which don’t. It is built from many small, painful repetitions of doing the work yourself, badly, until the work stops being bad.
AI shortcuts every one of those repetitions.
A junior associate who uses AI to summarize cases instead of reading them is not learning to recognize patterns in case law. They are learning to recognize patterns in AI summaries of case law, which is a different and lower-grade skill. The capability that should have been forming during their first years of practice is not forming. They will not notice this for several years. By the time they do, the gap will be hard to close.
“AI’s servant” is not a metaphor
The phrase I keep coming back to, when I describe this pattern to colleagues, is AI’s servant. I think it is literal.
A servant is someone who executes the will of someone else. The relationship between an associate and the AI tool is starting to look uncomfortably similar. The AI generates output. The associate carries the output to the partner. The associate’s contribution is the carrying—the translation, the formatting, the cover note. The substance comes from the tool.
In a traditional firm structure, the partner generated the substance and the associate carried it to the client. The associate’s job was to learn by carrying. By the time they became partners themselves, they had absorbed the substance through years of carrying.
In the new structure, the AI generates the substance. The associate carries it. The associate is not absorbing anything they will be able to use later. They are practicing the skill of carrying AI output—a skill that does not generalize to anything else, and a skill that, by its nature, does not get harder over time. They will be carrying output at year ten the same way they carried it at year one.
What they will not be able to do, at year ten, is replace the AI when it produces something subtly wrong. They will not have built the calibration that lets a senior lawyer say “this argument doesn’t work, here’s why.” They will be permanent operators of a system they cannot improve and cannot replace.
What I tell young lawyers privately
I cannot say this in firm-wide trainings. The official position has to be “embrace AI, use it responsibly.” Anything more nuanced gets read as resistance, and resistance is not a fundable position in 2026.
But what I tell young lawyers privately, when they ask my honest view, is something like the following.
Use AI for things that are not your craft. Use it to schedule, to summarize internal emails, to draft non-substantive client communications, to translate between languages. Use it to accelerate the production of work product that you have already drafted.
Do not use it to do your reading for you. Read the cases. Read the contracts. Read the regulations. The slowness is the point. Your future capability is being built by the slowness.
Do not use it to write your first drafts. Write your own first drafts, badly. Then ask AI to critique them. The critique you receive will not be very good, but the act of having drafted something forces you to develop a position, which is the part that matters.
When you must use AI for substantive research, ask it the opposing question first. “What is the strongest argument against the position I’m hoping for?” The model is much more useful as a sparring partner than as a confirming oracle.
And, finally: notice when you stop being able to do the work without the tool. That is the moment when you have become its servant rather than its user. Most of your colleagues will not notice this transition. The ones who do, and adjust, will be the senior lawyers of the next generation. The ones who don’t will spend their careers operating wrappers.
The firm-level problem
This is also an institutional problem, and the firms that figure this out first will have an enormous advantage.
The firm of the future does not need fewer associates. It needs associates who are being trained differently—with mandatory unaided practice, with explicit instruction in how AI can degrade rather than improve capability, with senior lawyers actively watching for the pattern I described above. None of this is what most firms are currently doing. Most firms are deploying AI tools, telling associates to use them, and assuming the productivity gains will translate into better lawyers.
They will not. They will translate into faster production of work that the associates do not actually understand.
The firms that recognize this are going to have a recruiting and retention advantage I think people are underestimating. Smart young lawyers will eventually figure out that they are being deskilled. They will leave the firms that deskilled them and join the firms that didn’t. The firms that didn’t will, in 2030 or so, find themselves with a talent depth that the AI-maximalist firms can no longer match.
I would be planning for that, if I ran a law firm. I am not sure many people do.
Part of an ongoing series on AI and the legal profession. If you’re a junior lawyer reading this and recognizing yourself in the pattern—or a partner watching it unfold at your firm—email [email protected]. I read everything.