There is a question I have been turning over for the better part of a year: if AI can produce competent legal analysis at near-zero marginal cost, what exactly are clients paying lawyers for?
The conventional answer is “judgment.” Lawyers, the argument goes, exercise judgment that AI cannot replicate—judgment about strategy, about risk, about the application of law to facts in cases the AI has not seen. This is true, as far as it goes. But it is not, I think, the most important answer.
The most important answer is something the legal profession has been bad at acknowledging, because it sounds soft, because it isn’t billable in the traditional sense, and because lawyers like to think of themselves as technicians of logic rather than providers of emotional containment.
The most important thing lawyers sell is the feeling of being correctly held through a frightening moment by someone who actually knows what they’re doing. Everything else—the analysis, the documents, the strategic advice—is in service of that feeling.
AI cannot produce that feeling. It probably never will. And the lawyers who figure out how to focus on this part of their work, rather than the parts AI is rapidly commoditizing, are the lawyers who will thrive.
What the client is actually anxious about
Start with the client’s experience. A person walks into my office for a serious legal matter—a contract dispute, an inheritance fight, an investigation, a divorce. They are not, in any meaningful sense, seeking information. They have plenty of information. They have read articles online. They have asked friends. They have, increasingly, asked AI.
What they don’t have is a sense of safety. They don’t know what’s going to happen. They don’t know how bad it could get. They don’t know what they should be doing right now versus what they should be worrying about later. They are operating in a state of cognitive overload that is, in many ways, worse than the underlying legal problem.
When a sophisticated lawyer takes on their case, the first thing that changes is not the underlying facts. The facts are what they are. The first thing that changes is the client’s emotional relationship to the situation. They go from “I am alone and overwhelmed and the system is going to grind me up” to “there is someone competent in my corner who has seen this before and knows what to do.” This shift is, I would argue, ninety percent of what the client is paying for.
The actual legal analysis is the visible deliverable. The relief from uncertainty is the actual product.
Why AI cannot deliver this
A model can produce competent legal analysis. It cannot produce the experience of being held. The reasons are structural, not technical, and I do not believe they will be solved by better models.
First, the client knows the model has no stake. If the AI’s analysis turns out to be wrong, the AI is not professionally responsible. It will not have its bar license revoked. It will not be sued for malpractice. The asymmetry of stakes—the lawyer bears real consequences for being wrong, the AI doesn’t—is what makes the lawyer’s reassurance meaningful. Reassurance from a system with no skin in the game is not, in any deep sense, reassurance. It’s just text.
Second, the client knows the model has no memory. When I take a case, I carry the client’s matter in my head, intermittently, for months or years. I think about their problem in the shower. I notice analogous cases when they come up. I have, at any given moment, an evolving internal model of their situation that gets refined by every new fact and every new strategic development. The AI starts fresh every conversation. It is, by design, a stateless answering machine. The client can sense this difference, even if they can’t articulate it. They know they are talking to something that doesn’t actually remember them.
Third, the client knows the model agrees with whoever asked. This is the structural problem I have written about elsewhere. The AI is trained to be helpful, which gets operationalized as agreeable. When the AI tells you the news is good, it might be because the news is good or it might be because you framed the question in a way that produced that answer. The client cannot tell. With a lawyer, the client knows that good news means good news, because the lawyer’s professional incentive is, if anything, to underpromise. A lawyer’s “I think we’re in a strong position” is a different category of statement than a chatbot’s “I think you’re in a strong position.” The client knows this at a gut level, even if they have just been arguing for the chatbot’s version.
The implication for how lawyers should work
If the actual product is the feeling of being correctly held, several things follow.
The first conversation matters more than any other. A first meeting where the lawyer establishes calm, demonstrates that they have understood the situation, and projects a clear sense of “I’ve seen this before, here’s roughly what happens next”—that conversation is worth more than the next twenty hours of work combined. It is the moment where the client’s emotional relationship to the case shifts. Lawyers who treat first meetings as procedural intake are leaving most of their value on the table.
Speed of response matters more than quality of response. A client who has been waiting four days for an email reply has, by day three, lost the sense of being held. The lawyer’s competence has not changed. The client’s perception of safety has collapsed. A lawyer who replies same-day with “I’ve received your email, here is what I’m thinking, I’ll have a full response by Tuesday” is providing more value than a lawyer who provides a perfect response on Thursday. The competent lawyers who lose clients to AI are often the ones who treat email as a literary form rather than as connective tissue.
The performative aspects of practice are not waste. A well-organized memo, a confident tone in meetings, a meticulous attention to detail in documents—these are not just professional hygiene. They are signals to the client that they are being held by someone serious. AI can produce a competent memo. It cannot produce the experience of receiving a competent memo from a person who you know is responsible for getting it right. The performance is part of the product.
Saying “I don’t know” is much more valuable than it sounds. Lawyers who have learned to say “I don’t know, here’s how I’d find out” earn enormous trust. AI cannot do this in a way that registers as honest. When AI says “I’m not sure,” it sounds like a hedge. When a senior lawyer says “I’m not sure, here are the three things that would tell us,” it sounds like wisdom. Same words, very different effects.
What this means for fees
If the product is the feeling of being held, that product is harder to commoditize than document production.
A client who has a competent AI-generated draft of a contract still wants a lawyer to look at it. They are not paying for the production of the draft. They are paying for the verification that nothing terrible will happen when they sign it. This verification is, in a technical sense, less labor-intensive than producing the original draft. But it is more valuable, because the client cannot do it themselves—not because they lack the technical skills, but because they cannot generate, in their own mind, the certainty that the draft is safe.
This is, I think, the part of legal work that will hold its price longest. Document production will commoditize. Document underwriting—the lawyer putting their name on something and accepting responsibility for it—will not. The lawyers who reorganize their practices around being the trusted underwriter of AI-produced work will be in a structurally stronger position than the lawyers who try to compete with AI on production speed.
The lawyers who will struggle
The lawyers who are going to have a hard time are the ones who have built their careers around producing documents efficiently. The faster they produce, the more directly they compete with AI, and the more visible it becomes to clients that they are paying for a service that has cheaper substitutes.
The lawyers who are going to do well are the ones who have built their careers around being a trusted presence—someone the client wants to call when something goes wrong, someone whose voice on the phone is itself a relief. This skill is not taught in law school. It is barely acknowledged at most law firms. It is, however, the actual core of the profession.
I tell younger lawyers: spend less time perfecting your written work and more time being the kind of person clients want in the room when bad news arrives. That is the skill AI cannot copy. That is the skill that will pay you for the next thirty years.
The rest is, increasingly, mechanical.
Part of an ongoing series on what lawyers actually sell in the AI era. If this resonated, the previous article on answer-shopping clients covers the flip side of the same dynamic.
Email me at [email protected] if you’ve had a moment where you realized what your client was actually paying you for.