15 Comments
Jan 24 (edited)

Thanks. This is an excellent article. Without having read the underlying work yet, which I fully intend to do, I want to note one thing as a former law clerk to a federal judge and a 20-year lawyer: the suggestion that there are right/wrong answers in the law. This is, I think, almost universally a misconception. Perhaps it is true if you are consulting regulations to determine the maximum permissible height of the hedges in your community, but the litigation disputes that overload our judicial system almost universally arise not from disagreement over what the controlling law is, but from competing interpretations of it; and that law is more often than not a judicial opinion crafted by an eminently fallible human judge (or, more likely, their law clerk), not a regulation that contains precise requirements. A lawyer's skill lies not in memorizing or regurgitating these laws (which change constantly, another obvious challenge for AI models) but in finding the most creative argument in favor of the client.

People laughed when Bill Clinton said “it depends on what the meaning of ‘is’ is.” But as a lawyer, it was a perfectly legitimate point. I struggle to believe we are anywhere near a model that can distinguish the various interpretations of a word like “is” and then identify, and cogently and persuasively argue, the one that best supports its client.

As you quite rightly say, there is (and long has been) plenty of room for AI and automation in law; indeed, it is an essential component of how I do my job. But I do not expect any sort of AI to displace the most essential work that lawyers do within my lifetime.

Full disclosure: I may be slightly biased. :)


Nicely done. I'm a law professor, though I never spent much time litigating. In jurisprudence, there is a common problem known as "reification," that is, treating a set of more or less dynamic relations as a thing. To some extent, that is what law does. We want a house title to be a house title, for example, and for many purposes reification does no harm. But most of law is the formalization of human and institutional legal relations which are non-binary and which shift in time, and for which the costs/risks/opportunities also shift. So the AI project is, jurisprudentially speaking, ontologically primitive. To bring the matter home: what is "the law" in the OpenAI debacle? In Silicon Valley Bank? All of this comes before, but is related to, understanding law as performative, lawyers as officers of the court, signatories as bound. Consider, in this regard, Ukraine/Russia, or a marriage. There is a sense in which AI is, at its best, doing law manqué, creating outputs that mirror human processes. This is like thinking that chess computers are playful.


Nice article! Your typology of legal applications, while useful at the 30,000-foot level, provides an overly reductive view of what discovery is. Discovery at its heart is fact-finding and evidentiary review. Most litigation cases get resolved during the discovery phase (see the law & political economy critiques of the "vanishing trial"). And while most of the history of discovery could be construed as information "processing" (from the 80s to the early aughts), the interpretive work that contemporary discovery sociotechnical systems perform stretches the "processing" metaphor past the point of usefulness. While I'm definitely on board with your critique in broad strokes, the lack of empirical accuracy about what's going on on the ground undercuts your argument. Contemporary discovery technology (always multi-modal, matching affordance to need when done well, and leveraging GOFAI, LLMs, GenAI, etc.) is helping senior litigators analyze which of their top-down claims are actually supported by bottom-up analysis and synthesis of extremely large data sets. This work is NOT about twiddling around on benchmark data sets. It requires computational-linguistic pragmatic and semantic analysis (fields that have existed for decades) that scales to monster troves of government and corporate data. Calling this work "processing" ignores the history of computation, information systems, and natural language processing/understanding.

To your titular question: AI has already started transforming civil discovery (and the corporate litigation world). We are still in the middle of that transformation as these automated review and analysis tools move beyond the early-adopter set (and I agree GenAI is more a bump in a larger transformation). Just look at how law firm organizational structure, career pathing, and attorney training are changing to see the evidence. "Will AI transform the law?" is in itself a hype-friendly, overly abstracted way of posing the important questions here. We need to go much deeper into the complexities of each use case and the tools that appear to provide value (or, conversely, those that don't and actually introduce risk).

Many legal technologists have Xerox PARC mentors if they have been around long enough. The kind of debunking you are doing would benefit from the approach of folks like Lucy Suchman and John Seely Brown, who did in-depth analysis of particular use cases while also pursuing critical (ethnographic) research debunking tech smoke and mirrors. We also have to be mindful of pernicious tropes and binaries like "processing" versus "judgment," especially when you are talking about sociotechnical systems whose very point is to transform the approach to human reasoning through the aid of computation.


Our approach at Pre/Dicta fundamentally differs from previous attempts to use statistical data for predictive purposes, and it confirms the article's findings. Indeed, predicting outcomes required more than text-based analysis or raw statistical data from PACER and similar sources.

Our analysis determined that combining biographical data with proprietary classifications of parties and attorneys can achieve a high accuracy rate for motion predictions. This approach is closer to behavioral analytics than to statistical modeling alone.

It is correct that grant/denial rates vary by case type, but case type alone is still insufficient for prediction. Case type is just one variable; without understanding which case characteristics drive grants or denials, raw rates are not predictive (to use the well-known trope: past performance is not an indicator of future results).
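To make the shape of such a model concrete, here is a toy sketch in Python. It is not Pre/Dicta's actual method or data: the feature names, categories, and outcomes below are all invented, and the sketch simply one-hot encodes judge, party, and case features and fits a logistic regression on past motions.

```python
# Toy sketch only: invented features and outcomes, not any vendor's real pipeline.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical training rows: one per past motion to dismiss.
rows = pd.DataFrame({
    "case_type":    ["securities", "employment", "patent", "employment"],
    "judge_school": ["Harvard", "State U", "Harvard", "State U"],
    "judge_party":  ["R", "D", "R", "D"],
    "party_class":  ["large_corp", "individual", "large_corp", "individual"],
    "firm_class":   ["amlaw100", "boutique", "amlaw100", "boutique"],
})
granted = [1, 0, 1, 0]  # invented outcomes

model = Pipeline([
    ("encode", ColumnTransformer([("cat", OneHotEncoder(), list(rows.columns))])),
    ("clf", LogisticRegression()),
])
model.fit(rows, granted)

# Score a new motion before any briefing is filed.
new_motion = pd.DataFrame([{
    "case_type": "securities", "judge_school": "Harvard", "judge_party": "R",
    "party_class": "large_corp", "firm_class": "amlaw100",
}])
print(model.predict_proba(new_motion)[0, 1])  # estimated probability of grant
```

Note that nothing in such a feature set looks at the facts of the particular case, which is exactly the distinction drawn elsewhere in this thread.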


Excellent piece, and I really appreciate the link to your paper, "Promises and pitfalls of artificial intelligence for legal applications."

One question: you mention that the application of "predicting judges' decisions before they happen" seems to be limited to the research world for now. What do you think of companies like Pre/Dicta (https://www.pre-dicta.com/), which claim to "predict judges' decisions with just a case number"? Are they doing something different from what you have in mind?


Yes, I think it's different. Our discussion is about predicting decisions from _text_ (this is mentioned in our paper but not in the blog post, sorry). The implicit claim is that AI is analyzing the facts of the case.

On the other hand, Pre/Dicta claims to use as features: NATURE OF YOUR SUIT + PARTIES + LAW FIRMS + YOUR JUDGE (EVERY PAST DECISION + BIOGRAPHICAL DATA + POLITICAL PARTY + NET WORTH + WORK EXPERIENCE + LAW SCHOOL)

This is much more believable — judges have various biases and there's no doubt a simple statistical model can pick up on that. They are not claiming to analyze the specifics of the case beyond the "nature of the suit".

It's unclear how useful this is. My understanding is that the average grant/deny rate of motions to dismiss varies a lot across case types, so a model that has learned such coarse statistics can probably achieve an impressive-sounding level of accuracy. But it's not clear it tells lawyers anything that isn't already obvious to them.
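To make that concrete, here is a tiny simulation in Python. All the grant rates are invented; the point is only that a "model" which memorizes the majority outcome per case type, and never looks at the facts of any case, can still report accuracy well above chance.

```python
# Toy simulation with invented grant rates; no real data.
import random

random.seed(0)

# Hypothetical grant rates for motions to dismiss, by case type.
grant_rates = {"securities": 0.80, "employment": 0.75, "patent": 0.30, "contract": 0.25}

def sample(n_per_type):
    # Simulate (case_type, motion_granted) pairs from the base rates.
    return [(ct, random.random() < p) for ct, p in grant_rates.items() for _ in range(n_per_type)]

history = sample(1000)

# "Model": memorize the majority outcome for each case type.
majority = {
    ct: sum(g for c, g in history if c == ct) > sum(c == ct for c, _ in history) / 2
    for ct in grant_rates
}

# Evaluate on fresh cases drawn from the same distribution.
test = sample(1000)
accuracy = sum(majority[ct] == granted for ct, granted in test) / len(test)
print(f"Accuracy from case-type base rates alone: {accuracy:.1%}")
```

With these made-up rates it lands at roughly 75%, which sounds impressive until you notice it conveys nothing beyond the base rate for the case type.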


> They often use the judgment text containing the final judgment to ‘predict’ the verdict — a blatant example of leakage. Since the text of the final judgment includes the verdict, the model has access to the answer when making its prediction.

This is not what Medvedeva & McBride say, per my reading of the paper. Leakage does take place, but the mechanism is a bit more subtle. The "facts" sections of judgements, which in theory are meant to be an impartial recap of said facts, are in reality stacked in support of the verdict. (I guess judges don't like plot twists in their decisions.) For instance, only relevant facts are included in the judgement, which by itself constitutes a judgement call.
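A minimal sketch of the distinction, with invented section headers and text: the first input choice hands the model the verdict verbatim (the blatant leakage the post describes), while the second strips the operative part but, as noted above, still inherits whatever slant the judge built into the facts section.

```python
# Illustrative only: the judgment text, section names, and outcome are invented.
import re

judgment = """FACTS
The applicant states that she was dismissed without notice. The employer
produced no contemporaneous records supporting its account.

LAW
...

CONCLUSION
The Court finds a violation of Article 6.
"""

def full_judgment_input(doc: str) -> str:
    # Blatant leakage: the model's input literally contains the verdict.
    return doc

def facts_only_input(doc: str) -> str:
    # Subtler leakage: the verdict sentence is gone, but the facts were selected
    # and phrased after the outcome was decided, so signal about it remains.
    match = re.search(r"FACTS\n(.*?)\n[A-Z]+\n", doc, flags=re.S)
    return match.group(1).strip() if match else ""

print(facts_only_input(judgment))
```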


In the corpus they analyze, they find both issues (some papers use the final judgment as an input, others use the facts as recounted in the final judgments).

(This supplementary sheet is the ground truth for their analysis; see the column on "input data": https://docs.google.com/spreadsheets/d/1mWp7k_jA8exLYrT9R5LllxEVfW6Tz1Wg_SlJsO1Vag8/edit#gid=0)


Thank you! That's useful, although I spot-checked a few papers from the "final judgements" input category (those that were easily accessible) and they do make _some_ attempts to mask the verdicts. I guess the incentives are stacked the wrong way for this part of their projects ;-)


I'm working on a similar article on this topic at the University of Antwerp. Great to read your insights. :)


That's cool


Excellent. Wondering if this has broader implications/applications: "Use AI in narrow settings with well-defined outcomes and high observability of evidence."


I definitely think law will change because of it.


Did this research address transactional legal work? I suspect that this is an area in which GPT-4 and Claude are already quite capable. More importantly, I suspect that it represents a significant portion of AmLaw 100 revenues, and if this work can be more effectively automated, it could make a meaningful dent.


David, as you are probably aware, documents in document-heavy transactions are almost entirely cut and pasted. Large firms are using AI to do a lot of the work. My understanding, however, is that this is pretty bespoke: the training data is the large firm's library of tried-and-true documents. If anything, this gives big firms even more power to determine what constitutes what transactional lawyers call "market," the set of compromises that the profession deems ordinary/acceptable. More deeply, prestige lawyering is, well, driven by prestige, not a cut-rate enterprise.
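For a sense of what "the training data is the firm's library" can mean in practice, here is a deliberately simplified sketch: retrieving the closest precedent clause by TF-IDF similarity. The library, clause text, and query are invented, and real systems are far more elaborate (and, as noted, bespoke to each firm).

```python
# A minimal sketch (not any vendor's actual system): rank clauses from a
# hypothetical firm precedent library against a drafting request.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical clause library; real systems index thousands of negotiated documents.
library = [
    "Indemnification: Seller shall indemnify Buyer against losses arising from breach.",
    "Governing Law: This Agreement is governed by the laws of the State of Delaware.",
    "Limitation of Liability: Neither party is liable for indirect or consequential damages.",
]

query = "which law governs this agreement"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(library)           # index the library
query_vector = vectorizer.transform([query])              # embed the drafting request
scores = cosine_similarity(query_vector, doc_vectors)[0]  # similarity to each clause

best = max(range(len(library)), key=lambda i: scores[i])
print(library[best])  # the closest "tried-and-true" clause
```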
