Machine learning and AI in financial crime

November 2024  |  TALKINGPOINT | FRAUD & CORRUPTION

Financier Worldwide Magazine

November 2024 Issue


FW discusses machine learning and AI in financial crime with Corey Dunbar at BDO USA, P.C.

FW: How would you characterise the evolution of financial crime across the globe? What methods are fraudsters using to exploit digitalised systems and processes for illicit gains?

Dunbar: Financial crime has evolved significantly alongside digital technologies, with fraud becoming increasingly prevalent. Fraud can take many forms – including identity theft, insurance fraud, accounting fraud, investment fraud and procurement fraud – and can be propagated in many ways. Cyber criminals often use social engineering to trick individuals into revealing sensitive information, and these schemes are becoming more sophisticated as artificial intelligence (AI) is used to simulate communications that convincingly mimic human interaction. Alarmingly, threat actors are collaborating to offer ransomware as a service or act as access brokers for large institutions. Companies now face not just isolated ‘bad apples’ but organised groups seeking illicit gains or large-scale disruption. The increasing interconnectivity of financial systems with other systems storing sensitive information compounds the problem: a single breach can have far-reaching consequences, making it crucial for organisations to monitor for and detect anomalous behaviour.

FW: To what extent are you seeing an increase in technologies such as artificial intelligence (AI) and machine learning (ML) being used to fight financial crime? How would you describe the appetite and uptake in recent years?

Dunbar: AI and machine learning (ML) have been used to combat financial crime for decades, particularly within fraud, anti-money laundering and know your customer programmes at payment processors and financial institutions. Advanced applications such as generative AI, which includes ChatGPT, are becoming more common within the day-to-day operations of large organisations, and senior leaders are asking how they can generate value by incorporating these technologies into their operations. It may not surprise you to learn that even regulatory bodies and government agencies are issuing opinions on the use of these technologies to combat financial crime. Risk management professionals need to capitalise on this momentum to advocate for investments in AI and ML for compliance purposes, just as companies are investing in them for commercial and enterprise applications.

FW: In what ways are AI and ML solutions typically being deployed? What key benefits do they offer?

Dunbar: AI and ML solutions are typically deployed in transaction and communication monitoring, customer due diligence and quantitative risk assessments. For instance, ML models can be trained to detect anomalies in transaction data, flagging potentially fraudulent or non-compliant activities for further investigation. This allows risk management professionals to analyse full sets of data rather than subsets while modelling the behavioural patterns within them. For example, AI models can be used to quickly analyse and understand customer behaviours. Imagine a customer who makes small, local purchases and then suddenly starts making large, international purchases. Individually, these transactions may not attract scrutiny, as each appears to fall within normal purchasing patterns. By looking at larger sets of data over time, however, these models can help companies see the trend for what it is: anomalous behaviour. These technologies offer several key benefits, including improved accuracy in detecting financial crime, reduced false positives and enhanced efficiency in compliance processes.
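
The behavioural-baseline idea described above can be made concrete with a deliberately simple sketch. The function below (a hypothetical illustration, not any specific vendor's model, and far simpler than a production ML system) flags a transaction when its amount deviates sharply from the customer's own history – exactly the small-local-purchases-then-sudden-large-purchase pattern described.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag transactions whose amount deviates from the customer's
    historical pattern by more than `threshold` standard deviations.
    A toy stand-in for the behavioural models described above."""
    flags = []
    for i, amt in enumerate(amounts):
        history = amounts[:i]
        if len(history) < 5:          # not enough history to judge
            flags.append(False)
            continue
        mu, sigma = mean(history), stdev(history)
        flags.append(sigma > 0 and abs(amt - mu) > threshold * sigma)
    return flags

# Small, local purchases followed by a sudden large one
print(flag_anomalies([12.50, 9.99, 15.00, 11.25, 8.40, 14.10, 2500.00]))
# only the final, out-of-pattern transaction is flagged
```

A real deployment would model far richer features (merchant, geography, velocity, peer groups), but the principle is the same: score each event against the customer's own learned baseline rather than a one-size-fits-all rule.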

These technologies offer several key benefits, including improved accuracy in detecting financial crime, reduced false positives and enhanced efficiency in compliance processes.
— Corey Dunbar

FW: What issues do companies need to consider when choosing and implementing AI and ML solutions to monitor and detect potential financial crime?

Dunbar: When choosing and implementing AI and ML solutions, companies need to consider several factors. First, data quality is crucial, as poor-quality data can lead to inaccurate results. It is a fallacy to assume that all large organisations have clean and accurate data sets by default, so be prepared to address data management and governance needs along the journey. Second, companies must ensure that their AI models are transparent and explainable to meet regulatory requirements. Regulators are weighing in on the appropriate use of these models, seeking to protect individuals and their underlying data. Finally, do not lose sight of the target audience. Well-designed models can fail simply due to a lack of adoption or understanding by the target audience. Do not provide the same outputs to senior leaders and to the level-one analyst responsible for triaging the model’s alerts. It is important to chart a course before embarking; road maps can be very helpful.

FW: How important is it to retain human oversight when deploying this technology?

Dunbar: Retaining human oversight is essential when deploying AI and ML technologies. While these tools can significantly enhance the detection and prevention of financial crime, human judgment is crucial for interpreting complex cases and making final decisions. For example, AI might flag a transaction as suspicious, but a human analyst can provide context and determine whether it is genuinely fraudulent. Human oversight also helps continuously improve AI models, as analysts provide feedback and address any biases or errors that arise. Keeping people at the centre of these applications helps ensure that bias is not introduced to sway a model’s outcomes, intentionally or otherwise, and that the outputs themselves are not put to unintended uses.
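
One common way to structure this is a review-queue workflow, sketched below in a toy form (the class and field names are hypothetical, chosen for illustration). The model only flags; a human analyst makes the final call, and each disposition is retained as a label that can later feed model retraining – the feedback loop described above.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    transaction_id: str
    score: float           # model's suspicion score, 0 to 1
    disposition: str = ""  # set only by a human analyst

class ReviewQueue:
    """Toy human-in-the-loop workflow: high-scoring alerts wait for a
    human decision, and every decision is kept as a training label."""
    def __init__(self, score_cutoff=0.8):
        self.score_cutoff = score_cutoff
        self.pending = []
        self.labels = []   # (transaction_id, analyst verdict)

    def ingest(self, alert):
        # The model can only queue an alert, never close it.
        if alert.score >= self.score_cutoff:
            self.pending.append(alert)

    def review(self, alert, verdict):
        # The human verdict is final and becomes feedback data.
        alert.disposition = verdict
        self.pending.remove(alert)
        self.labels.append((alert.transaction_id, verdict))

queue = ReviewQueue()
alert = Alert("txn-001", score=0.93)
queue.ingest(alert)
queue.review(alert, "false_positive")  # analyst overrules the model
```

The design point is that the model has no path to a final disposition: every flagged item passes through a person, and the accumulated verdicts become the ground truth used to correct the model over time.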

FW: What essential advice would you offer to companies on harnessing the power of AI and ML to effectively mitigate financial crime, while managing potential risks and liabilities, and maintaining regulatory compliance?

Dunbar: Companies should start by clearly defining their objectives and understanding the specific risks they face. The risks facing a payment processor may differ from those facing a broker-dealer firm. Anchor a programme in the outputs of a risk assessment. Avoid adopting a ‘one size fits all’ programme; instead, ensure that resource allocation is directly aligned with addressing the highest risks. From there, assess the data landscape to understand what is possible before exhausting available resources. This further illustrates the need to design first and then develop. Similarly, collaborating with subject matter experts, such as risk owners in the organisation, can help in selecting AI and ML solutions tailored to the organisation’s needs and ensure the intended audience is on board from day one. Companies should also implement robust governance frameworks to oversee AI deployments and ensure compliance with regulatory standards. This is increasingly important when it comes to data privacy, as the misuse of sensitive information in AI models can have very negative outcomes.

FW: Looking ahead, what future opportunities could AI and ML bring to financial crime prevention? Is continuous innovation vital to keep companies ahead of malicious actors?

Dunbar: AI and ML can provide immense benefit when dealing with large data sets in which relationships are hard to uncover or seemingly absent. Tools such as graph databases, for example, will drastically reduce the time analysts spend tracing the lineage of data and understanding the relationships within it. My sense is that as organisations generate more data across a wider variety of their business practices, the opportunity to integrate AI and ML into risk management will only increase. Companies that are not building a foundation now will be playing catch-up later. Beyond technological innovations, companies should seek to upskill their employees to understand the benefits and use of these technologies. Staying ahead of malicious actors requires ongoing investment in research and collaboration with industry peers and regulatory bodies to share best practices. Do not be left behind.
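
The kind of link analysis a graph database performs at scale can be illustrated with a tiny, stdlib-only sketch (the entity names are invented for the example). A breadth-first search over shared attributes – devices, accounts, addresses – surfaces a connection between two parties who have no direct relationship, the "seemingly absent" relationships mentioned above.

```python
from collections import deque

def connection_path(edges, start, target):
    """Breadth-first search over an entity graph -- a toy version of
    the link analysis a graph database performs at scale. `edges`
    are undirected pairs of related entities (shared accounts,
    devices, addresses, etc.). Returns the shortest path, or None."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Two customers with no direct link turn out to share a device
# that also touches a suspected mule account.
edges = [("customer_a", "device_1"), ("device_1", "mule_acct"),
         ("mule_acct", "customer_b")]
print(connection_path(edges, "customer_a", "customer_b"))
# -> ['customer_a', 'device_1', 'mule_acct', 'customer_b']
```

Production graph databases add indexing, query languages and algorithms such as community detection on top, but the underlying value is the same: making indirect relationships cheap to find.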

Corey Dunbar is a principal in BDO’s forensics practice. He specialises in data analytics for detecting fraud, bribery, corruption and compliance risks, focusing on enabling compliance programmes through technology. With extensive experience in heavily regulated global industries, he designs and implements compliance monitoring solutions, predictive models and forensic data mining techniques. He also advises clients on leveraging analytics and technology, designing operational elements of compliance programmes, and evaluating capabilities from a people, process and technology perspective. He can be contacted on +1 (732) 621 5082 or by email: cdunbar@bdo.com.

© Financier Worldwide


THE RESPONDENT

Corey Dunbar

BDO USA, P.C.

