GenAI-enabled fraud: fighting fire with fire

September 2024  |  FEATURE | FRAUD & CORRUPTION

Financier Worldwide Magazine

September 2024 Issue


Technology is a tool that is neither good nor bad. It has the capacity to be a catalyst for change or have a detrimental impact on society – the difference depends on the intent of those that wield it.

Generative artificial intelligence (GenAI) is one such tool. A global market that is expected to soar to $1.3 trillion by 2032, GenAI can be used to ease workloads, add labour efficiencies and reduce manual touchpoints. Conversely, its rapid adoption also has considerable potential to be used for fraudulent purposes.

And the level of fraud is significant. A May 2024 survey report by Sift found that 68 percent of consumers noticed an increase in the frequency of spam and scams from around November 2022, when GenAI tools began to be adopted at scale. In terms of online payment fraud alone, global losses are projected to jump from $38bn in 2024 to $91bn in 2028.

“GenAI is being rapidly adopted in various sectors for its ability to write text, create code and generate hyper-realistic photos, videos and audio – and these features are also proving attractive to fraudsters,” states the Sift report. “The power and speed of AI tools can be used to deceive or manipulate victims on a mass scale, targeting individuals in highly personalised ways. They are similar to typical online scams, but AI algorithms give bad actors additional advantages.

“What is more, the widespread availability of GenAI is leading to the democratisation of fraud, allowing individuals without much technical knowledge to become fraudsters and quickly attack targets,” continues the report. “Such scams pose significant risks, including financial losses, reputational damage and psychological harm.”

In the corporate world, and particularly for attractive targets such as the financial services sector, there is great potential for malicious actors to deploy GenAI to commit fraud in many different forms. Moreover, as GenAI becomes more accessible and impressive in its output, fraudsters are emboldened to further exploit its power to deceive and harm – a trend that could have devastating consequences for organisations’ business operations.

Types of GenAI fraud

The types of fraud being perpetrated by GenAI are numerous and growing. According to analysis by Fourthline – ‘How Generative AI will power fraud in 2024, and what you can do about it’ – GenAI is accelerating fraud in two key respects: sophistication and scale.

This sophistication and scale include AI bots that scrape personal information from sources such as social media platforms and online databases to create convincing fake profiles, as well as the adaptation of GenAI tools (including ChatGPT and dark web ChatGPT-like products such as WormGPT and FraudGPT) to create convincing phishing emails, cracking tools and carding schemes (a type of credit card fraud).

Another related scam is password spraying, where AI generates lists of the most-used passwords in a given industry and criminals then test these across accounts in a targeted organisation. Account takeover (ATO), where fraudsters gain unauthorised access to accounts at online businesses using stolen login credentials, is another password-based scam.
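For illustration only, the defensive side of this pattern can be sketched in a few lines of Python. The function and data names below are hypothetical; the point is simply that spraying has a distinctive signature – one password tried against many accounts – that is easy to surface from failed-login logs:

```python
from collections import defaultdict

def detect_password_spray(failed_logins, account_threshold=5):
    """Flag password hashes attempted against many distinct accounts --
    the signature of a spraying attack (few passwords, many accounts)."""
    accounts_per_password = defaultdict(set)
    for account, password_hash in failed_logins:
        accounts_per_password[password_hash].add(account)
    return {pw for pw, accounts in accounts_per_password.items()
            if len(accounts) >= account_threshold}

# Example: one password hash tried against six different accounts.
attempts = [(f"user{i}", "hash_a") for i in range(6)] + [("user0", "hash_b")]
print(detect_password_spray(attempts))  # {'hash_a'}
```

A real deployment would of course window the log by time and feed such signals into broader authentication monitoring rather than act on counts alone.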

Also on the increase are deepfake video and audio, along with sophisticated fake IDs generated by machine learning (ML), which are being used in social engineering attempts or to defeat authentication systems during customer onboarding or anti-money laundering (AML) checks.

“The area of GenAI we are most concerned about is deepfakes,” says Rahul Mahna, a partner at EisnerAmper. “This technology has been around for a few years, but only in the last year or so has it become so impactful that it is causing real damage.

“Commercially, it has been well-documented that GenAI deepfake videos have been created to extract millions of dollars from multinational firms,” he continues. “In addition, we have seen a rise in deepfake audio calls where family members are impersonated to procure thousands of dollars in falsely created crisis situations.”

The use cases outlined above are just a small subset of the rapidly evolving and expanding ways in which GenAI is being used to perpetrate fraud. As it is, this is only the beginning of the GenAI fraud journey, with its full parameters impossible to determine.

Fighting fire with fire

While GenAI can empower malicious actors to carry out fraudulent activities, the same technology can also be used to detect and prevent the very fraud it helps perpetrate – a world away from traditional fraud detection and prevention approaches.

“GenAI, when applied to fraud detection, operates differently and in a more advanced way than the traditional, rule-based approaches,” states Turing in its 2024 ‘The role of generative AI in fraud detection and prevention’ analysis. “It uses advanced algorithms, such as deep learning techniques, to not only identify existing patterns of fraud but also to predict and generate potential fraudulent scenarios before they occur.”

According to Turing, there are six key areas where GenAI can help organisations to detect GenAI-enabled fraud, as outlined below.

First, data augmentation. Often, the instances of fraud within available datasets are scarce or do not fully represent the myriad ways fraud can manifest, particularly with novel or evolving schemes. Data augmentation tackles this issue by using algorithms to generate additional, synthetic data that mimics authentic fraudulent and non-fraudulent transactions or behaviours.
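As a rough illustration of the principle – not any vendor’s actual method – synthetic minority-class samples can be produced even by simple jittering of real fraud examples; generative models do this far more richly, but the mechanics are the same. All names and values below are hypothetical:

```python
import random

def augment_fraud_samples(fraud_rows, n_new, noise=0.05, seed=42):
    """Generate synthetic fraud-like rows by jittering the numeric
    features of real fraud examples by up to +/- `noise` (5 percent)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(fraud_rows)
        synthetic.append([v * (1 + rng.uniform(-noise, noise)) for v in base])
    return synthetic

# Two real fraud rows: (amount, hour of day, merchant risk score).
real = [[950.0, 3.0, 0.9], [720.0, 2.0, 0.8]]
fake = augment_fraud_samples(real, n_new=100)
print(len(fake))  # 100
```

The augmented rows are then mixed into the training set so the detection model sees far more fraud-like variation than the raw data contains.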

Second, reducing false positives. False positives occur when legitimate transactions are incorrectly flagged as fraudulent, leading to unnecessary investigations and potentially blocking genuine customer activities. GenAI addresses this issue by leveraging its advanced learning algorithms to more accurately distinguish between fraudulent and legitimate transactions.

Third, adaptive learning. Adaptive learning enhances fraud detection and prevention by enabling systems to evolve in response to new information and emerging threats. Unlike static, rule-based approaches that remain unchanged until manually updated, GenAI models process incoming data in real time, learning from new transactions, behaviours and fraud patterns as they occur.
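A toy sketch of that idea, assuming a stream of labelled transactions: an online model updates its weights one event at a time instead of waiting for a batch retrain. The class and feature names are illustrative, and a real system would use a far richer model:

```python
import math

class OnlineFraudScorer:
    """Tiny online logistic model: weights are nudged by each newly
    labelled transaction, so the model adapts as patterns shift."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))      # probability of fraud

    def update(self, x, label):                # label: 1 = fraud, 0 = legit
        err = self.score(x) - label            # gradient of the log-loss
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlineFraudScorer(n_features=2)
for _ in range(200):                           # stream of labelled events
    model.update([1.0, 0.0], 1)                # pattern A keeps proving fraudulent
    model.update([0.0, 1.0], 0)                # pattern B keeps proving legitimate
```

After the stream, the model scores pattern A as high risk and pattern B as low risk – without any manual rule update in between.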

Fourth, real-time data analysis. Real-time data analysis identifies and mitigates fraudulent actions before they can inflict financial damage or data breaches. Traditional methods, which often rely on batch processing, can delay the detection of fraud, while GenAI’s real-time analysis allows suspicious activities to be flagged and investigated immediately, halting fraudulent transactions in their tracks.

Fifth, behavioural analysis. Behavioural analysis observes and learns from the nuanced behaviours exhibited by users during their interactions with digital platforms. This method is critical in fraud detection and prevention as it allows for the identification of subtle, irregular patterns that may indicate fraudulent activity, which traditional rule-based systems might overlook.
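One simple way to ground this idea: score each new session against the user’s own historical baseline. The sketch below uses a plain z-score on a single hypothetical feature (typing interval in milliseconds); production behavioural models combine many such signals and learn far subtler patterns:

```python
import statistics

def behaviour_anomaly_score(history, observation):
    """Z-score of a new session feature against this user's own
    baseline: how many standard deviations from their normal?"""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9   # guard against zero spread
    return abs(observation - mean) / stdev

typing_intervals = [210, 195, 205, 200, 198, 202]   # user's usual cadence
print(behaviour_anomaly_score(typing_intervals, 201) < 2)   # True: normal
print(behaviour_anomaly_score(typing_intervals, 120) > 2)   # True: suspicious
```

A sharp deviation – here, someone typing much faster than the account holder ever has – becomes a red flag even though no static rule was ever written for it.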

Lastly, threat intelligence. Threat intelligence enhances fraud detection and prevention strategies by integrating insights about new and emerging threats into the system’s decision-making process. By leveraging a vast array of data sources including industry reports, security bulletins and real-time incident data, GenAI can learn about the latest fraud schemes and adapt its models accordingly.

“Theoretically, GenAI should decrease investigation times and allow firms to block accounts much quicker, limiting the funds lost and improving outcomes for victims,” affirms Joanne McNaul, senior director of financial crime risk management at K2 Integrity. “In the UK, the Public Sector Fraud Authority has cautiously welcomed the use of GenAI and its ability to use large language models to interrogate vast amounts of complex information, while the Serious Fraud Office is already using the technology to expedite investigations.”

Another approach is proposed by Experian in its 2024 guidance ‘Machine learning and AI in fraud detection’, which advocates the creation of generative adversarial networks to train a ‘discriminator’ GenAI network to better identify synthetic data. “Algorithms can be trained to spot inconsistent facial movements or features, flickering lights and audio discrepancies in deepfakes,” adds Ms McNaul. “GenAI can also be used to pull data from existing tools and systems to give a more comprehensive view of red flags indicating potentially fraudulent behaviour.”

In the experience of Mr Mahna, significant improvement is also being seen at the computing endpoint. “GenAI has made great strides on finding ransomware on a computer by way of following patterns of fraudster behaviour and predicting how a malicious process might proceed,” he notes. “This predictive nature can at a minimum alert in advance of a possible improper action or at a maximum prevent it entirely, all without the user being involved.”

Enhancing and refining

As GenAI continues to drive a transformative shift in how cyber fraud is fought, organisations need to tailor their approach accordingly, such as by enhancing their ability to identify novel fraud patterns and refining existing systems to address emerging threats.

“Organisations should find external experts to help them establish trends and improve their systems to combat current benchmarks,” suggests Mr Mahna. “For example, if the trend is more toward email fraud, enhancing user training, dark web scans and enhanced email pre-filtering may be more appropriate and a better use of budgets.

“Bad actors are ultimately looking for money and their GenAI tools, however creative they may be, are all focused on a financial reward,” he continues. “By incorporating higher quality internal controls on cash disbursement, an organisation can help mitigate downside risk.”

Organisations may also need to consider whether to ‘buy or build’ or overlay existing systems with a new GenAI solution. “Anomaly detection models can help organisations to detect ‘novel’ fraud scenarios that deviate from established norms, such as unusual patterns or outliers in transaction data,” explains Ms McNaul. “Ideally, this should be based on behavioural analysis and scenarios, such as biometric data points, transaction frequency, spending habits and geolocation data.

“Organisations can build ‘reinforcement’ learning into their existing or new GenAI solution to adapt fraud detection rules dynamically, based on real-time feedback,” she continues. “Building in external data, such as IP reputation databases, will enrich the dataset and improve model performance. Collaboration is key – fraud analysts, data scientists, IT, risk professionals and frontline business units should share insights and learnings to promote awareness and transparency, so that rules and thresholds can be refined in a timely manner.”
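As a purely illustrative sketch of the feedback loop Ms McNaul describes – not any firm’s actual implementation – an alert threshold can be nudged dynamically using analysts’ verdicts on recent alerts. All names and numbers are hypothetical:

```python
def adjust_threshold(threshold, analyst_verdicts, target_fp_rate=0.1, step=0.02):
    """Nudge the alert threshold from real-time analyst feedback:
    too many false positives -> raise it; few -> lower it to catch more."""
    if not analyst_verdicts:
        return threshold
    fp_rate = analyst_verdicts.count("false_positive") / len(analyst_verdicts)
    if fp_rate > target_fp_rate:
        return min(1.0, threshold + step)
    return max(0.0, threshold - step)

t = 0.5  # current risk-score cutoff for raising an alert
t = adjust_threshold(t, ["fraud", "false_positive", "false_positive", "fraud"])
print(round(t, 2))  # 0.52 -- half the alerts were false, so tighten
```

The same loop, run continuously across analyst teams, is what keeps rules and thresholds refined "in a timely manner" rather than waiting for a periodic manual review.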

Friend or foe?

Although many organisations are still researching or experimenting with GenAI technology, most, if not all, recognise the ‘friend or foe’ role this technology plays in fraud perpetration and detection. Its myriad applications and solutions each carry offensive and defensive capabilities.

“GenAI can be a ‘friend’ when it is widely understood, regularly monitored and tested via robust model validation controls,” suggests Ms McNaul. “Appropriate governance is also fundamental to making sure that its performance is regularly reported. Training is key for all staff involved in any of the processes, as well as the departments that use GenAI to support their ‘day job’.

“As GenAI relies on huge amounts of data, it can be vulnerable to cyber attacks,” she continues. “In order to prevent GenAI becoming a ‘foe’, businesses must have accurate data, robust cyber security, information security and data privacy processes in place to counteract the threat – all of which should be subject to formal governance, control testing and robust oversight.”

As GenAI continues to be leveraged by bad actors in increasingly sophisticated fraud attempts, organisations not only need to be aware of regulatory change in this space and the obligations this creates, but must also recognise that GenAI is now firmly positioned as a critical component in the future of GenAI-enabled fraud detection and prevention – an antidote to its own threats and complexities.

© Financier Worldwide


BY

Fraser Tennant

