
Replacing Lawyers with Robots? A Discussion on the Ethics of AI and Machine Learning in Legal Document Review.

This discussion wasn’t new, however, and it reminded me of the early debates I listened to when AI and machine learning were beginning to be adopted in e-discovery for the purpose of legal document review. At that time, many thought legal jobs were at risk. The industry started to wonder whether we should feel guilty about using these tools if they could result in fewer people being employed.

In practice today, an AI-assisted review helps legal teams organize data quickly: lawyers code a sample of documents as responsive or not responsive, and those decisions train the software to predict which of the remaining documents are most likely to be relevant. This approach is commonly referred to as technology assisted review (TAR).
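
For readers who like to see the mechanics, here is a minimal sketch in Python of what that training step can look like, assuming a simple TF-IDF text representation and a logistic regression classifier. The documents, labels and library choices are illustrative assumptions for the sake of the example, not a description of any specific product used in practice.

```python
# A minimal, illustrative TAR-style relevance model (not any vendor's actual tooling).
# Lawyers' responsive / not-responsive coding decisions become training labels,
# and the model scores the remaining documents by predicted relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical coded sample: 1 = responsive, 0 = not responsive
coded_docs = [
    "email discussing the pipeline incident and remediation costs",
    "weekly menu for the head office cafeteria",
    "contractor report on environmental damage at the site",
    "automated newsletter about the staff parking arrangements",
]
labels = [1, 0, 1, 0]

# Uncoded documents awaiting review
unreviewed_docs = [
    "draft memo on incident liability and insurance coverage",
    "invitation to the annual summer barbecue",
]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(coded_docs)

model = LogisticRegression()
model.fit(X_train, labels)

# Probability that each unreviewed document is responsive;
# reviewers would be served the highest-scoring documents first.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```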

In 2012, I worked on my first large matter, concerning an environmental disaster in the U.S. This was a few years before AI-driven reviews were accepted by courts in the U.S. and U.K. as a valid approach for e-discovery.¹ The ‘manual’ approach used at the time resulted in a small army of review lawyers being hired to sit in an office for 12 months, clicking through millions of digital files extracted from corporate email inboxes and network shares.

My first AI-driven case came only two years later and concerned a public inquiry into an infrastructure project. Approximately 10,000,000 electronic files were available for legal review, which was widely considered an exceptionally large data set at the time. In contrast, a document set of that size is now a typical mid-size disclosure exercise (for example, one current case has more than 120,000,000 electronic files available for review).

During the 2012 matter, we would run over-inclusive search terms and batch out hundreds of thousands of electronic files to be manually reviewed by dozens of lawyers. Yet by 2014, on my first matter utilising AI, the review rooms were quieter and the teams were smaller. Had the robots silently taken over?

I eventually reconciled my own worries about this, partly through the experience gained from working on AI-driven projects and partly by following how the industry has matured over the last decade. A critical point is that when the “robot” is used correctly, it helps to find precise facts faster and augments human ability and intelligence, to the point that complex concerns such as data privacy can be addressed much more quickly.

Artificial intelligence in e-discovery

The persistent myth, though, is that AI-driven legal reviews require fewer document reviewers, and that leveraging AI in a document review therefore results in lawyers missing out on gainful employment.

This may well be true if one accepts the premise that AI-led reviews introduce efficiencies. A more efficient review, one would hope, results in lower costs through fewer billable hours. The industry sees this as one of the primary reasons AI reviews are adopted, so logic dictates that AI reduces document review hours and thus the number of humans employed.

However, this has not occurred in the way it was first predicted. Firstly, large projects in EMEA are struggling to recruit and retain enough document reviewers. There is still a great deal of work out there, and for lengthy durations too. Furthermore, the pandemic compelled legal services companies to introduce secure methods for document review team members to work remotely. The geographic reach of the document review candidate pool has grown as a result, in certain cases expanding to the entirety of the EU. Provided the IT infrastructure security prerequisites are in place, a document review project can kick off in a matter of days, from recruitment through to reading in on protocol documents, with daily review briefings conducted via video conferencing. FTI Technology utilises a toolset that satisfies the most stringent data protection rules and ensures a review team can operate as effectively remotely as it would if everyone were located in the same building.

Another reason to be optimistic that the robots are not winning is the ongoing exponential growth of data sets. It would not be feasible to staff a document review the way we did back in 2012 for a dataset from 2022; there is simply too much data not to leverage AI to help sift through it. Employees are generating more data than ever before: Microsoft Teams, Slack and Discord chats, mobile devices, SharePoint sites, large PST files.

Another advantage of AI-led review is that it boosts the quality and number of potentially relevant documents delivered to the review team, and earlier in the case too. As a document reviewer, it can be extremely disheartening to be assigned a large batch of documents made up of obviously not relevant material. I can recall occasions where search terms were used to identify the review set, an over-inclusive term hit on a file path containing thousands of junk files, and every single one of those files had to be reviewed, because that is what the review methodology dictated. An AI-driven review, by contrast, would learn from the “not relevant” designations applied to the junk files and quickly cease to serve them up for review.
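
Continuing the illustrative approach above, the short sketch below shows that feedback loop in miniature: newly coded “not relevant” junk is added to the training data, the model is refit, and similar files fall below the review cutoff. The junk file paths and the 0.5 cutoff are hypothetical assumptions, not any specific product’s behaviour.

```python
# Illustrative sketch: "not relevant" codes on junk files feed back into the
# model so that similar files stop being batched out for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Coding decisions so far, including junk hit by an over-inclusive file-path term
coded_texts = [
    "site inspection report on contamination levels",   # responsive
    "email chain about regulator meeting next week",    # responsive
    "temp/cache/junk_0001 system log rollover",         # not relevant
    "temp/cache/junk_0002 system log rollover",         # not relevant
]
coded_labels = [1, 1, 0, 0]

# Retrain on everything coded to date
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(coded_texts), coded_labels)

# Unreviewed documents: only those above the cutoff are queued for reviewers
remaining = [
    "temp/cache/junk_0003 system log rollover",
    "board briefing on regulator meeting and incident costs",
]
scores = model.predict_proba(vectorizer.transform(remaining))[:, 1]
CUTOFF = 0.5  # hypothetical relevance cutoff agreed in the review protocol
next_batch = [doc for doc, score in zip(remaining, scores) if score >= CUTOFF]
print("Queued for human review:", next_batch)
```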

This also makes it much easier to avoid wasting review budget on false positives. Better still, a well-trained AI seed set can be recycled again and again. At FTI Technology, we have gained experience applying existing training sets to “fresh” documents, which significantly speeds up early case assessments and fast-moving target identifications.
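
A hedged sketch of that recycling idea, again using assumed names and a simple scikit-learn pipeline rather than FTI Technology’s actual workflow, might look like this: a model trained on one matter is persisted and then reloaded to score a new collection before any reviewer has coded a single file.

```python
# Illustrative only: recycling a trained "seed set" model on fresh documents,
# e.g. at the early case assessment stage of a new matter. File names and the
# joblib persistence choice are assumptions for the sake of the example.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Matter A: fit a relevance model on documents already coded by reviewers
seed_texts = [
    "supplier email about defective components and recall costs",   # responsive
    "quarterly update on defective component replacements",         # responsive
    "office notice about fire alarm testing",                       # not responsive
    "calendar invite for the team social",                          # not responsive
]
seed_labels = [1, 1, 0, 0]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(seed_texts, seed_labels)
joblib.dump(pipeline, "matter_a_relevance_model.joblib")

# Matter B: reload the existing model and score the fresh collection immediately,
# before a single reviewer hour has been spent on it
recycled = joblib.load("matter_a_relevance_model.joblib")
fresh_docs = [
    "board paper summarising the component recall exposure",
    "office notice about the next fire alarm testing date",
]
for doc, score in zip(fresh_docs, recycled.predict_proba(fresh_docs)[:, 1]):
    print(f"{score:.2f}  {doc}")
```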

The European Commission has developed a strategy to promote the use of AI that works for people and is a force for good in society, through a “human-centric approach to AI, while ensuring safety and fundamental rights are protected...”² Does an e-discovery AI approach meet these criteria? I would argue that, on balance, yes: it is a tool we use to bring about positive outcomes for society.

¹ Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125 (S.D.N.Y. 2015) (black-letter law that a responding party can use a TAR/AI approach).
² https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.