AI in Investigations: Tailoring Capabilities to Specific Tasks
When a global consumer foods company became the subject of a sensitive investigation, the legal team was tasked with finding and analysing related electronic documents. In a matter of weeks, the organisation needed to sift through a massive amount of data spread across numerous systems, languages and geographies. At a time when a lack of resources undermined the organisation’s ability to conduct a timely investigation, artificial intelligence presented a compelling solution. A team of data scientists, AI-trained reviewers and prompt engineers refined prompts for accuracy and used generative AI to accelerate key aspects of discovery, reducing workload while upholding a high standard of quality control and allowing the investigation to be completed effectively.
As in this investigation, when generative AI is applied with expert oversight and tailored to the unique attributes and needs of a matter, it can deliver impactful results. It is not a panacea, though. Different types of investigations have specific requirements, which AI may or may not be able to fulfil. And other variables, such as the type of model being used, the risk profile of the organisation, the data sources in scope and the jurisdiction, will all have a bearing on whether and how AI will be effective.
Across the breadth of investigations organisations face, ranging from narrowly focused internal investigations to multi-jurisdictional regulatory responses, various stages and adjacent digital forensics and discovery best practices come into play. Any strategy for using AI should be built around these differentiating factors.
For example, in an internal investigation in which an organisation needs to quickly determine if one departing employee took sensitive or proprietary materials with them when they left, generative AI might be helpful in scanning and summarising that individual’s files and activities leading up to the departure to identify any suspicious behaviour. Or, in a data breach response matter, teams might speed up the process of determining what sensitive data was exposed by using generative AI to identify personally identifiable information in large batches of documents, helping determine the nature and extent of the incident. Separately, in a large, complex government inquiry, AI could be used to help construct privilege logs for privileged material that may be in scope for production to regulatory authorities.
The ability of different generative AI functions to adapt to a wide range of scenarios can significantly enhance the investigation process when a team knows when and how to use them. The technology is especially effective at analysing emerging data such as voice messages and images shared in chat threads, a common challenge investigative teams face. These data types are becoming more common in modern investigations, and generative AI can help manage and analyse them faster than traditional methods.
Several examples of how generative AI can be applied to specific types of investigations and specific investigative tasks are outlined here:
- Early-stage investigations: Early in a potential matter, when legal teams need to quickly understand whether an issue or suspected violation will require a full-blown investigation, generative AI can provide insight to inform decision making. These tools can perform tasks such as identifying and summarising key documents, allowing teams to determine the best course of action or whether to prepare for further inquiry.
- Time-sensitive matters: Whether due to a regulatory probe or a sensitive internal matter, cases that require rapid response benefit from early insights. Generative AI can be pointed at large datasets to provide chronologies, surface patterns and identify hot documents that help prioritise documents for review.
- Expansive investigations: Large-scale investigations, such as merger clearance reviews or regulatory matters spanning numerous jurisdictions, require teams to analyse and understand massive sets of documents and complex data sources. AI can support efficient data exploration to help reduce the volume of documents that must be reviewed by humans, enabling faster and more cost-effective reviews.
- Risk assessment: Many investigations require a risk assessment during which the legal team determines the organisation’s exposures and the potential outcomes of different courses of action. AI can help in assessing these risks by identifying key facts and quickly uncovering evidence that might be missed in traditional review.
- Code word detection: Generative AI has shown strong potential in detecting anomalous content, such as code words or unusual calendar entries that might signal suspicious activity. Such insights can be especially useful during investigations into employee conduct, intellectual property misappropriation and fraud.
- Whistleblower response: Generative AI can help legal and compliance teams ease the burden of reviewing and responding to whistleblower reports by interpreting complaints, asking follow-up questions dynamically and identifying documents that may be relevant to the report.
- Quality control: AI is meant to augment and complement human-led workflows. Generative AI tools can be added as an extra layer to investigative processes to catch errors human processes might miss, reducing the likelihood of missing key facts and enhancing overall reliability.
While there are many scenarios where AI is proving effective, it is also key to understand where generative AI may not provide a reliable solution. Algorithmic biases and gaps in the data used to train a model can limit AI’s effectiveness when certain attributes are present. For example, consider an investigation involving documents in languages that are underrepresented globally. If a model has not been adequately trained on those languages, it will be far less accurate at analysing the material and picking up on relevant content and context. Similarly, classified or secret data might not be well represented in AI training data, so if a model is applied to a dataset with classified information, it may not know which patterns or words hold relevance. Therefore, it is important for legal teams to be aware of what the models have and have not been trained on, so potential biases can be mitigated.
Unlike traditional e-discovery and investigative methodologies such as predictive coding, AI's diverse capabilities require enhanced validation strategies to measure accuracy and defensibility. As with watching for algorithmic bias, teams should work with experts to ensure the right validation strategy is applied to the task for which AI was used. With the right human oversight and a customised approach, legal teams have an opportunity to leverage the power of generative AI to streamline their investigative workflows, enhance understanding of high-stakes matters, improve their ability to meet deadlines and reduce costs.
The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.