Why the metaverse and AI are a double-edged sword
Financier Worldwide Magazine | October 2022 Issue
SPECIAL REPORT: FINANCIAL SERVICES
Technology has transformed financial services over the last 20 years. The disappearance of bank branches from high streets speaks to that. It is also unfortunately clear that the move online brings new challenges in fraud prevention.
This article looks at how technology both assists in dealing with these challenges and adds new types of risk, as online business models continue to evolve in ways that touch financial services.
Among the earliest practical business uses of artificial intelligence (AI) was assisting fraud prevention in financial services. AI systems are exceptionally good at pattern recognition tasks. This extends to identifying things that do not fit the expected pattern of information presented to them. Over the last 10 years there have been huge improvements in AI, both in the technologies used and in their application to new tasks. Alongside this technical development, the finance world has substantially expanded its boundaries.
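To make the pattern-recognition point concrete, below is a minimal sketch of anomaly-based fraud flagging using scikit-learn's IsolationForest. The transaction features, values and contamination rate are all illustrative assumptions, not a production design.

```python
# A minimal sketch of AI-style anomaly detection on payment data,
# using scikit-learn's IsolationForest (feature names are illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: amount (GBP), hour of day, days since last transaction.
normal = np.column_stack([
    rng.normal(60, 20, 1000),   # typical spend
    rng.normal(14, 3, 1000),    # daytime activity
    rng.normal(3, 1, 1000),     # regular cadence
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A transaction that does not fit the learned pattern is flagged (-1).
suspect = np.array([[4800.0, 3.0, 0.1]])  # large amount, 3am, rapid-fire
print(model.predict(suspect))  # -1 => flag for review
```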
The rise of financial technology (FinTech) has led to two big changes. First, the range of financial businesses has broadened from high street banks and building societies to a wide variety of digital businesses holding various types of licences to do finance business. Second, the product range has widened from accounts and lending to apps and websites offering new and niche products.
The latest trend is embedded finance, a generic term for financing made available as part of a non-financial transaction. In the past, a customer buying a bicycle, for example, and wishing to fund the purchase with a loan would go to the bank, fill in a loan application form and wait for funds to be made available. Today, the bike merchant’s website offers a loan as part of its checkout process. ‘Buy now pay later’ is far from the only example of embedded finance, though; insurance products are another large category.
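As a purely hypothetical sketch of how such an embedded loan offer might be wired into a checkout page, the snippet below posts basket details to a lender's decision service. The endpoint, field names and response shape are all invented for illustration.

```python
# Hypothetical embedded-finance call from a merchant checkout page:
# the lender's decision service returns an instant loan offer.
import json
from urllib import request

def request_loan_offer(basket_total_gbp: float, customer_id: str) -> dict:
    payload = json.dumps({
        "amount": basket_total_gbp,
        "customer": customer_id,
        "product": "point-of-sale-loan",
    }).encode()
    req = request.Request(
        "https://lender.example/api/v1/offers",  # invented endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # e.g. {"approved": true, "apr": 9.9} -- invented response shape
        return json.load(resp)
```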
Financial products offered in real time mean that fraud prevention decisions must be made without human input. This leads to the use of AI and algorithms of other types. To be clear on terms: algorithms enable decisions to be taken automatically; AI does this too, with the addition that its algorithms update themselves over time, again automatically.
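The distinction can be shown in a few lines of Python: a fixed algorithm applies the same threshold forever, while a self-updating model (here scikit-learn's SGDClassifier with partial_fit, on invented data) shifts its decision boundary as confirmed outcomes arrive.

```python
# Static algorithm vs. self-updating AI model (illustrative data only).
import numpy as np
from sklearn.linear_model import SGDClassifier

def fixed_rule(amount: float) -> bool:
    """Static algorithm: approve anything up to a hard threshold."""
    return amount <= 1000.0

# Online learner: starts from initial labelled data, then keeps adjusting.
X0 = np.array([[50.0], [80.0], [5000.0], [7000.0]])
y0 = np.array([0, 0, 1, 1])  # 1 = confirmed fraud
model = SGDClassifier(random_state=0)
model.partial_fit(X0, y0, classes=[0, 1])

# Later, new confirmed outcomes stream in and the boundary moves --
# the same facts can produce a different decision over time.
model.partial_fit(np.array([[900.0]]), np.array([1]))
print(fixed_rule(900.0), model.predict([[900.0]]))
```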
AI and algorithms give rise to challenges. It is not unusual for a firm to be operating with algorithms and AI that are embedded in software procured, rather than developed, by the firm. This means that the first task for a business is to map exactly where and how AI is used in its software.
AI must be continually monitored. The self-learning aspects of the software mean that its response to the same set of facts will differ over time. This should be a commercial benefit to the business, but it adds work from a risk and compliance perspective. Algorithms and AI are highly sensitive to the data to which they are exposed. To get accurate results, clean data must be fed in. To get unbiased and non-discriminatory results, data adjusted to remove embedded bias must be fed in.
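One common monitoring technique, sketched below on illustrative data, is the population stability index (PSI), which compares the current distribution of model scores against the distribution at sign-off; a common rule of thumb treats a PSI above 0.2 as material drift worth investigating.

```python
# Drift monitoring sketch: population stability index (PSI) between the
# score distribution at model sign-off and the distribution seen today.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b, _ = np.histogram(baseline, bins=edges)
    c, _ = np.histogram(current, bins=edges)
    b_pct = np.clip(b / b.sum(), 1e-6, None)  # avoid log(0)
    c_pct = np.clip(c / c.sum(), 1e-6, None)
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, 10_000)  # scores at validation
current_scores = rng.beta(3, 4, 10_000)   # scores after self-learning
print(f"PSI = {psi(baseline_scores, current_scores):.3f}")  # > 0.2 => alert
```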
The combination of these challenges means that financial institutions (FIs) need to have a structured and clear approach to algorithms and AI. If it does more harm than good, AI should be stripped out of decision making.
A very basic plan would look something like this. First, know and log where algorithms and AI are already operating in the business’ systems, and where they operate in any software newly developed, acquired or licensed by the business.
Second, algorithms and AI should be put through a regular auditing process, a service usually provided by specialist technology providers and lawyers working together. Auditing in this context is a technical specialism. Broadly, it involves establishing and reporting on the model used by the system, its learning procedure, its objectives, the parameters within which it operates, and its input and output data. Some systems are fully testable (‘white box’); others are opaque in many respects (‘black box’). The latter are likely to be inappropriate for any consumer-facing financial firm.
Third, an institution should have a policy on AI ethics. Technology ethics require technology to be fair, accountable and transparent, to enhance rather than reduce privacy, and to be created responsibly, allowing for the interests not just of the business, or even of the business and its customers, but also of wider society.
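As an illustration of the ‘know and log’ and auditing steps, a firm might keep one structured record per deployed model covering the fields the audit reports on. The sketch below shows one way to do that in Python; every field value is hypothetical.

```python
# One inventory/audit record per deployed model (all values hypothetical).
from dataclasses import dataclass

@dataclass
class ModelAuditRecord:
    system: str               # where the algorithm/AI is embedded
    vendor_supplied: bool     # procured vs. developed in-house
    model_type: str           # e.g. "logistic scorecard"
    learning_procedure: str   # how and when the model updates
    objective: str            # what the model optimises
    parameters: dict          # operating limits and thresholds
    input_data: str           # data sources feeding the model
    output_data: str          # decisions or scores produced
    testability: str          # "white box" or "black box"
    last_audited: str         # ISO date of last audit

registry = [
    ModelAuditRecord(
        system="checkout loan decisioning",
        vendor_supplied=True,
        model_type="logistic scorecard",
        learning_procedure="monthly retrain on settled loans",
        objective="minimise default rate at fixed approval rate",
        parameters={"max_loan_gbp": 2000, "score_cutoff": 0.62},
        input_data="application form + credit bureau data",
        output_data="approve/decline + score",
        testability="white box",
        last_audited="2022-09-01",
    ),
]
```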
Failure to get these things right can have serious consequences. Laws dealing with these topics are now coming onto the books. From January 2023, the use of ‘automated employment decision tools’ in making employment decisions will be regulated in New York City, and more jurisdictions will follow. Existing laws relating to mis-selling, discrimination and misuse of data carry severe penalties, and the reputational risk of poor or biased decisions can be far greater than the cost of individual claims. Beyond all of this, the UK Online Safety Bill is working its way through parliament and will make online businesses, including all financial businesses, take a level of responsibility for user-generated content. This becomes acutely relevant as online communities form and develop.
The challenge of the ‘metaverse’
There is currently no globally accepted definition of the ‘metaverse’. It can be thought of as a digital space in which we can socialise, work or participate in creative and immersive experiences. It is often described as the future vision of the internet, moving away from the service we know today toward immersive 3D. This vision of the metaverse can be experienced via virtual reality (VR) headsets, which plunge users into a 360-degree view of the digital world. Alternatively, it can appear as a 2D overlay on our physical world via augmented reality (AR) displays, be visited through desktop computers, or be accessed through futuristic devices such as haptic suits and 360-degree treadmills.
While the debate over exactly what the ‘metaverse’ means may be ongoing, we are already seeing many exciting opportunities for individuals and businesses. In May, Spotify planted its flag as the first music streaming service within the Roblox metaverse, aiming to give users a space to create music, socialise and access exclusive virtual merchandise. To democratise access to its heritage art collection, the Vatican has announced a partnership with metaverse developer Sensorium to create the first-ever VR and non-fungible token (NFT) gallery hosting Vatican art.
This is all underpinned by growing interest across metaverse platforms, with The Sandbox welcoming over 290,000 players in its season two game and reaching 700,000 players through its social contests. Similarly, Decentraland has seen increased user activity, with almost 600,000 metaverse visits in July alone. While this may pale in comparison to Facebook’s 1.62 billion daily active users, it signifies huge interest in using metaverses. Notably, Second Life, considered an ‘OG’ metaverse, still sees between 30,000 and 50,000 daily active users despite having been created in 2003 and having limited functionality and graphics resolution.
As well as companies and individuals looking to embrace the metaverse, several innovation-friendly governments and jurisdictions are making metaverse moves. Dubai has put the metaverse at the centre of its innovation ambitions and expects it to add 40,000 new jobs and $4bn to the economy by 2027; it has already purchased land in The Sandbox metaverse for a virtual embassy, following a similar announcement from the Barbados government in December 2021. South Korea has invested $177m into metaverse-related projects via its new state programme, Digital New Deal. This will fund Seoul’s ‘Metaverse 120 Centre’, offering residents the ability to meet public officials, albeit in avatar-to-avatar form.
As with any innovation, criminals are already turning their attention to how they can use the metaverse for illicit purposes. Our recently released report, ‘The Future of Financial Crime in the Metaverse’, details a number of cases in which metaverse-related assets are being used to launder the proceeds of crime. The most prevalent metaverse crime falls within the scam and fraud categories: illicit actors look to dupe unsuspecting metaverse visitors into clicking malicious links so that they can swipe their crypto funds, with nefarious players impersonating support staff for fake metaverse projects. In addition, there is a risk of future typologies, including 3D phishing attacks, in which malicious actors copy a person’s avatar to steal information or access, or fake land expansions and metaverse assets to dupe users into buying replicas.
The good news is that crypto ecosystem participants can be armed with knowledge about these typologies and utilise blockchain analytics tools to help protect against bad actors.
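In its simplest form, such screening reduces to checking a counterparty wallet address against known-bad lists before transacting. Real blockchain analytics tools go much further, tracing funds across many hops; the toy sketch below, with made-up addresses, only shows the entry point.

```python
# Toy counterparty screening against a locally held denylist.
# Addresses are fabricated; real tools trace fund flows across hops.
FLAGGED_ADDRESSES = {
    "0xdeadbeef00000000000000000000000000000001",  # fake scam wallet
    "0xdeadbeef00000000000000000000000000000002",  # fake mixer output
}

def screen_counterparty(address: str) -> str:
    if address.lower() in FLAGGED_ADDRESSES:
        return "BLOCK: address linked to known illicit activity"
    return "PASS: no direct flag (deeper tracing still advisable)"

print(screen_counterparty("0xDEADBEEF00000000000000000000000000000001"))
```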
There are still open questions about how to tackle metaverse-related crime. These include the debate about online and financial privacy versus the need to discover and stop illicit activity. There are also ethical questions about open versus closed metaverses, how existing rights can be preserved in the metaverse, and how we can learn lessons from failures in ‘web2’ to make better decisions at the early design and build stages of the metaverse. This is especially relevant when we consider the prevalence of trolling across social media and the impact this is having on young people’s mental health or the challenge of tools such as WhatsApp being used to share lifesaving information in warzones, but also being used by terrorist organisations. The metaverse carries the same risk of being a morally agnostic platform which could be used for ‘good’ or ‘evil’, and it will be up to crypto ecosystem participants to discuss how best to design it. Fortunately, many of these conversations are already happening, with forums being created to discuss interoperability and compliance.
While the metaverse affords all sorts of exciting opportunities there are also developing risks of which we must be cognisant. The challenge is to understand how the benefits can be harnessed without repeating the same mistakes of ‘web2’ or opening new technological avenues for criminals to exploit.
Charles Kerrigan is a partner at CMS and Tara Annison is head of Technical Crypto Advisory at Elliptic. Mr Kerrigan can be contacted on +44 (0)20 7067 3437 or by email: charles.kerrigan@cms-cmno.com. Ms Annison can be contacted by email: tara.annison@elliptic.co.
© Financier Worldwide
BY Charles Kerrigan, CMS, and Tara Annison, Elliptic