Ethical Issues Around AI Use in the Legal Industry

Published: May 22, 2024


What does the future of the legal profession look like? This question is always salient, but the burgeoning use of AI throughout the legal industry has given it particular relevance. This post is in no way meant to be exhaustive. Rather, its intent is to surface some of the ethical issues surrounding the use of AI in the legal industry so that law students and attorneys alike can consider how they want the profession to look in the coming years and decades.

How AI Is Already Being Used

AI is already in use in ways both unsurprising and surprising. For instance, it is fairly well known that the first practical use of AI in the legal industry was in e-discovery, where its advantages in streamlining a normally cumbersome process are obvious. In a similar database-trawling vein, AI is already seeing significant use in legal research and document management. Finally, on the more advanced end, some pioneering law firms are using AI for predictive analytics, which is useful for estimating the odds that a legal maneuver will succeed or fail in a given jurisdiction and for helping firms and clients budget matters in advance.

The Ethical Issues Surrounding AI

This topic can be approached from several angles. For our purposes, we'll separate out the biggest concrete issues and give a brief overview of the thorny ethical questions each one poses.

Bias and Algorithmic Transparency

It's hard to follow the news about AI without hearing, on at least a weekly basis, a new story about a prejudiced mistake made by a machine-learning algorithm. These can range in severity from embarrassing but inconsequential linguistic gaffes to triage systems that sort targeted medical interventions in a way that predictably leads to inferior clinical outcomes for Black patients. A further issue is that most of this technology is so cutting-edge that no independent assessment yet exists of how effective it is at its purported functions. Firms and clients are essentially gambling on unproven technology, however promising it may be.

Liability When Legal AI Makes a Mistake

We currently hear a lot about this question as it relates to self-driving cars, but it is equally relevant to professional liability. Any lawyer who has tried to go solo (or anyone who has worked on budgets at a BigLaw firm) knows how staggeringly expensive professional liability insurance can be. That expense reflects the enormous damages clients can suffer when lawyers fail in their professional responsibilities. So what happens when a legal AI makes a mistake that harms a client's interests? Is the company that developed it responsible, or the attorney who used it? Both? How should that liability be allocated? These questions remain an unanswered frontier of the law, and attorneys should, at minimum, be thoughtful about how they use AI in light of them.

AI Solutions and Non-Lawyer Practitioners

Legal services providers like LegalZoom, which offer solutions without close supervision by a licensed attorney, raise another significant area of concern. It stands to reason that such services will adopt legal AI tools as they become available, at which point both the liability question raised above and the way those tools are advertised become significant. If licensed lawyers and non-lawyer practitioners both use the same AI solutions, how must non-lawyer practitioners differentiate themselves from licensed attorneys in their advertising? These and other issues mark yet another area where regulators and attorneys should focus their attention.

***

Obviously, this article has barely skimmed the surface of the morass of issues surrounding the use of AI in the legal industry, and we may well revisit the subject in future posts to discuss issues we didn't introduce here. We don't have firm answers on what the solutions to these problems ought to be. Rather, we encourage all interested parties, from students and attorneys to schools, firms, and clients, to thoughtfully consider each of these issues and to be intentional when making decisions about AI solutions in our profession.

***