Blog Post

Understanding the Intersection of Compliance and Generative AI

When Department of Justice Deputy Attorney General Lisa Monaco spoke at the American Bar Association’s National Institute on White Collar Crime in March, she clarified the department’s views on the role of artificial intelligence in committing, detecting and prosecuting corporate crime. A key point was that legal and compliance teams should treat the use of AI within their organizations as a potential source of risk, and that the DOJ may take action against organizations that fail to properly monitor and govern AI use.

In short, the need to approach AI judiciously, within rigorous governance and compliance frameworks, is becoming more urgent. Moreover, the DOJ’s latest comments may signal impending revisions to guidance around AI, data and compliance monitoring requirements.

This heightened scrutiny comes as the dynamic technology behind AI is gaining adoption across a variety of compliance use cases. Current and emerging applications of advanced analytics and generative AI offer myriad opportunities to streamline compliance workflows in the areas of third-party risk management, whistleblower hotlines, investigations, risk detection and scoring, and due diligence screening. AI can also be used to monitor communications in near real time, detecting unusual patterns indicative of non-compliant activities. These use cases can be leveraged as the building blocks of a risk-resilient AI program that proactively assesses risk and enables mitigating action.  
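To make the communications-monitoring use case concrete, the sketch below shows, in deliberately simplified form, how a program might flag messages for review. The `RISK_TERMS` list, message format and thresholds are illustrative assumptions only; a production system would use vetted lexicons or trained classifiers, not a toy keyword list.

```python
from collections import Counter

# Illustrative risk terms a compliance team might flag. These are
# assumptions for this sketch, not a recommended lexicon.
RISK_TERMS = {"off the books", "delete this", "side agreement", "kickback"}

def flag_messages(messages):
    """Return messages containing any risk term (case-insensitive)."""
    flagged = []
    for msg in messages:
        text = msg["text"].lower()
        if any(term in text for term in RISK_TERMS):
            flagged.append(msg)
    return flagged

def volume_anomalies(messages, threshold=3):
    """Flag senders whose message count exceeds a simple threshold,
    a crude stand-in for real statistical anomaly detection."""
    counts = Counter(m["sender"] for m in messages)
    return {sender for sender, count in counts.items() if count > threshold}
```

In practice, flagged messages would feed a human review queue rather than trigger automated action, consistent with the human-oversight concerns discussed below.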

At the same time, significant risks and ethical considerations must be addressed before AI can be implemented effectively. Issues of bias, accuracy, security, privacy, data provenance, intellectual property exposure and regulatory requirements are among the many potential pitfalls. With increasing regulatory attention, compliance, risk and legal professionals will need to proactively weigh these risks against the potential benefits of AI implementation to ensure that its use is both intentional and controlled.

Indeed, an increasing openness to adoption of generative AI across enterprises, and within the compliance function, signifies a new chapter in compliance management, where advanced technology stands to transform traditional compliance practices and unlock new efficiencies. 

FTI Technology’s Digital Insights & Risk Management experts are closely evaluating and testing the emerging technological, regulatory, ethical, practical and operational risks and opportunities in the evolving AI landscape. The latest white paper from FTI Technology’s Risk & Compliance practice provides a detailed look at the top issues for chief compliance and chief risk officers. 

Risk areas discussed include:

  • Bias and fairness: generative AI models can amplify existing biases in the data they are trained on, resulting in potentially misleading, inaccurate or discriminatory outputs.
  • Misrepresentation of information: this technology could be used to fabricate information, such as financial statements, making corporate fraud harder to detect.
  • Dependency and overreliance: a lack of human oversight of generative AI applications and their outputs could allow compliance issues to go unnoticed. For example, a tool could hallucinate issues that are not actual causes for concern, or struggle to handle scenarios outside those on which it was trained.
  • Transparency and auditability: many models are black boxes, offering little insight into the datasets used to train the system or the quality of the algorithms, and limited ability to interrogate the reasoning behind a tool’s decisions. Where decision transparency is critical, generative AI can significantly complicate investigations by obscuring responsible parties and undermining audit trails.
  • Unintended decision making: algorithms can produce content or prompt actions not fully anticipated or intended in the design, leading to non-compliance with regulations or industry standards. 
  • Data privacy concerns: models that produce synthetic data might unintentionally generate information resembling real individuals’ data, leading to potential privacy breaches. In addition, information fed to the model that contains personally identifiable information could be used in a manner that violates data privacy laws or otherwise compromises personal data.
  • IP exposures: who owns the dataset leveraged to train the model? Who owns the algorithm that creates the model? Who owns the license to the system? Compliance officers will need answers to these and other complex questions as the use of generative AI expands across the enterprise.
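Several of the data privacy risks above come down to knowing what enters a model. As a deliberately simplified illustration (the patterns and function names here are hypothetical, and real programs rely on dedicated PII-detection tooling rather than two regular expressions), a pre-submission scan and redaction step might look like:

```python
import re

# Minimal, illustrative patterns for two common PII categories.
# These are assumptions for this sketch and would miss many real cases.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text):
    """Return the set of PII categories detected in text."""
    return {name for name, pattern in PII_PATTERNS.items() if pattern.search(text)}

def redact(text):
    """Replace detected PII with placeholders before text reaches a model."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}]", text)
    return text
```

Logging which categories were detected, and when, also supports the auditability concerns raised above by leaving a record of what was withheld from the model.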

The paper also covers the key areas of opportunity for compliance teams to leverage AI solutions, including:

  • Third Party Risk Management
  • Regulatory Compliance Monitoring
  • Compliance Program Assistance and Training
  • Data Quality and Compliance

Generative AI is likely to play a wide range of roles in reshaping organizational approaches to risk and compliance. Corporate compliance officers who proactively mitigate impending risks will enable their organizations to approach adoption from an informed and vigilant position. To learn more, download the white paper: How Compliance and Risk Officers Can Balance the Benefits and Risks of AI in Compliance.
 

The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.