Jeremy Sheridan Discusses the Global Fight Against Fraud

FTI Technology Managing Director Jeremy Sheridan is an expert in blockchain and digital assets, supporting clients with digital asset investigations and providing expert testimony for cases involving cryptocurrency and other digital assets. With a prestigious background in law enforcement, Jeremy is also committed to innovation and collaboration in combating financial crime. He recently participated in the first battlefront event hosted by the American Institutes for Research Global Fraud program, which helped participants from across different sectors learn about the rise of artificial intelligence-enabled and AI-driven fraud. In this blog, he shares key learnings from the event.
Agentic and generative AI are increasingly being used as tools to help bad actors defraud individuals and organizations. Law enforcement, investigators, AI experts, professionals across the financial and emerging technology sectors, and consumers will need to partner to stay ahead of the next generation of criminal activity. This is precisely what AIR is looking to facilitate.
A nonprofit, non-membership organization working to make the financial system fully inclusive, fair and resilient through responsible use of new technology, AIR attempts to connect regulation, finance, technology and society to help overcome the system’s legacy shortcomings and prepare it for rapid technology change. AIR’s Global Fraud program is committed to combating financial fraud on a global scale and prioritizes four key impact areas: consumer protection, consumer education, policy and technology.
In support of these aims, the battlefront event provided an interactive simulation designed to tackle the rise of AI-enabled and AI-driven fraud. Teams were provided with AI tools and behavioral insights to develop and test defensive strategies against AI-driven malicious activity.
The event divided participants into two groups: attackers and defenders. Participants developed AI-powered fraud schemes leveraging advanced technology, crafted defense strategies to detect and neutralize these exploits, and used workshopping techniques to ideate and refine solutions. Strategies were presented in an interactive face-off, in which fraud teams launched AI-powered attack scenarios and defense teams attempted to counteract them.
Many seasoned investigators remember handling cases involving the early social engineering scams that used either paper or email versions of fabricated letters claiming to be from a member of royalty or a wealthy family in another country. The letters would describe an unjust imprisonment or loss of access to a fortune that “only” required a small fee to be paid in order to retrieve the treasure. The letters would leverage the usual tactics of urgency, vast wealth and the request for personal financial information.
As far-fetched as these letters sound, they were unfortunately highly effective. Fast forward to today, when the underlying methods of these letters are combined with the resources of AI, target research and digital payment methods. It is easy to see how fraudulent activity continues to flourish and becomes dangerously scalable.
This is not only applicable to those who are vulnerable and susceptible to a romance or pig-butchering scam. Anyone can be a target, and the advent of AI-generated content makes all kinds of schemes exponentially more successful.
“My mother passed recently and in the stress and distraction of trying to handle the transition of her property, I fell victim to a fraudulent check scheme that targeted me through an online marketplace. If it can happen to a former law enforcement officer who has investigated financial crimes for close to three decades, it can certainly happen to those less aware of the threats.”
To that end, the AIR workshop demonstrated the widespread availability of AI tools that can be leveraged for fraudulent purposes. Participants included a collection of professionals from various fields, some of whom had technical backgrounds and AI knowledge and many who held non-technical positions. Regardless of their skills, the fraudster teams were able to use open-source, publicly available AI tools to build generative AI versions of government-issued documents, video or voice messages, advertisements for fake companies, and synthetic identification documents with relative ease in a very short period of time. These were created with specific targets in mind, such that the videos or voice messages tailored the content to gender, nationality, language and dialect to more effectively elicit the intended response from the victim.
These weapons are effective in any environment, but elements of the cryptocurrency landscape carry heightened vulnerabilities that contribute to their success. Criminals seek opportunities based primarily on the continued increase in value of many cryptocurrencies and on the speed at which transactions are executed. For example, falsified AI-generated transaction screenshots can misrepresent token performance to facilitate investment schemes. AI prompts can be designed to build rapport, leverage emotion or communicate empathy with targeted victims, and to more effectively target lonely or overconfident investors. Deepfakes can be used to socially engineer victims and promote fraudulent investments by misrepresenting endorsements from corporate, entertainment or industry influencers. Synthetic identities built from harvested or reused credentials can bypass unsophisticated know-your-customer (KYC) controls.
This challenge is likely to persist for the foreseeable future as digital assets become more ingrained in the financial system while remaining a nascent concept in many ways. For example, recent legislation has allowed 401(k) retirement plans to include digital assets, which will introduce new transaction patterns, data types, trading partners and communication platforms to those involved. This unfamiliarity can create opportunity for AI-driven fraud, as market participants don't know what to look for and may assume a fake transaction, especially one enhanced by AI, is legitimate.
Response and defense
The defenders demonstrated creativity and application of AI tools equal to the scenarios and attack schemes created by the attackers. Despite the effectiveness and depth of the AI-generated attack methods, similarly designed AI assistants were set up to analyze the context, content and patterns of received data. While the full scope of designing these tools would be difficult to replicate, the defense teams were able to implement contextual considerations such as user behavior, location and device usage. They further deployed AI-based tools that established transaction patterns and data requirements to determine what constitutes suspicious behavior without interrupting legitimate transactions.
As with any decision of consequence, these tools were built on a risk management framework: defenders had to analyze the risks involved and implement their plans based on risk tolerance. The factors they considered included location and device usage to identify transactions from unrecognized areas or systems, pattern recognition to flag activity occurring at abnormal or inconsistent times, threshold values to identify outlier volumes, categorization of asset types to distinguish novel transactions, and the reputation of the counterparty.
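A minimal sketch of how risk factors like these might be combined into an additive score is below. The factor names, weights, threshold and risk-tolerance value are illustrative assumptions for this post, not a description of anything built at the event.

```python
# Illustrative rule-based risk scoring for a transaction.
# All factor names, weights and thresholds here are hypothetical.

KNOWN_DEVICES = {"device-a1"}
TRUSTED_REGIONS = {"US", "GB"}
AMOUNT_THRESHOLD = 10_000  # flag outlier volumes above this value

def risk_score(tx: dict) -> int:
    """Return a simple additive risk score; higher means more suspicious."""
    score = 0
    if tx["device_id"] not in KNOWN_DEVICES:
        score += 2                      # unrecognized device
    if tx["region"] not in TRUSTED_REGIONS:
        score += 2                      # unrecognized location
    if not 6 <= tx["hour"] <= 22:
        score += 1                      # abnormal time of day
    if tx["amount"] > AMOUNT_THRESHOLD:
        score += 3                      # outlier volume
    if tx["counterparty_reputation"] < 0.5:
        score += 2                      # poorly rated counterparty
    return score

def needs_review(tx: dict, tolerance: int = 4) -> bool:
    """Route to human review only when the score exceeds risk tolerance."""
    return risk_score(tx) > tolerance

suspicious = {"device_id": "device-zz", "region": "XX", "hour": 3,
              "amount": 25_000, "counterparty_reputation": 0.2}
print(risk_score(suspicious), needs_review(suspicious))  # prints "10 True"
```

The `tolerance` parameter captures the risk-tolerance decision described above: set it too low and legitimate transactions are interrupted; set it too high and fraud slips through.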
In the exercise, a key attribute of the most successful defense teams was human oversight to check the automated processes for accuracy and prevent false positives. This human element was combined with other friction points that did not rely solely on AI solutions, leveraging safeguards such as multi-factor authentication, behavioral biometrics, and employee education and training.
Blockchain nexus
It's important to reiterate that the cryptocurrency environment is not uniquely suited to fraudulent activity. That is a myth. Despite misrepresentations of cryptocurrency as exclusively a tool for criminal activity, the truth is that cryptocurrency, and the blockchains on which it is built, can provide an effective countermeasure to the kinds of AI-generated fraud displayed in the AIR exercise. Blockchain can serve as a primary technology for verifying the authenticity and integrity of AI-generated data. It can be used to create an immutable record of the data generation process, including the AI model used, input data and parameters. This allows the data's origin to be verified and ensures that it has not been tampered with.
AI-generated data can be digitally signed using blockchain-based cryptographic techniques to ensure the data is authentic and has not been altered during transmission or storage. Blockchain's hash functions can create a unique digital fingerprint of the AI-generated data, such that any change to the data results in a different hash value, allowing tampering or manipulation to be detected. Blockchain-based smart contracts can be programmed to check the integrity and authenticity of AI data before allowing it to be used in a specific application.
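As a simple illustration of the fingerprinting and signing ideas above, the sketch below hashes a hypothetical AI-generation record and detects any alteration. It uses only Python's standard library, with an HMAC standing in for a true public-key signature; a production system would instead use asymmetric signatures (e.g., ECDSA) and anchor the hash on-chain.

```python
import hashlib
import hmac
import json

# Hypothetical provenance record for a piece of AI-generated content:
# the model used, its input parameters and the generated output.
record = {"model": "example-model-v1",
          "params": {"temperature": 0.7},
          "output": "Generated marketing copy..."}

def fingerprint(data: dict) -> str:
    """SHA-256 digest over a canonical JSON encoding of the record."""
    canonical = json.dumps(data, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

SECRET = b"demo-signing-key"  # stand-in for a real private signing key

def sign(digest: str) -> str:
    """HMAC 'signature' over the fingerprint (illustrative only)."""
    return hmac.new(SECRET, digest.encode(), hashlib.sha256).hexdigest()

def verify(data: dict, digest: str, signature: str) -> bool:
    """Re-hash the record and check both the digest and its signature."""
    return (fingerprint(data) == digest
            and hmac.compare_digest(sign(digest), signature))

digest = fingerprint(record)
signature = sign(digest)
print(verify(record, digest, signature))    # prints "True": record intact

tampered = dict(record, output="Altered copy")
print(verify(tampered, digest, signature))  # prints "False": hash mismatch
```

This is the same check a smart contract could perform: recompute the fingerprint of submitted data and refuse to act on it unless it matches the digest recorded at generation time.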
The broader community will need to remain vigilant in the face of bad actors looking to stay ahead of consumers and enforcement. This will include leveraging innovative technologies, including solutions that combine AI and blockchain, to equip experts and investigators so they can more effectively prevent and enforce against crime.
The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.