The EU AI Act: implications for financial services

July 2024  |  FEATURE  |  RISK MANAGEMENT

Financier Worldwide Magazine, July 2024 Issue


Newly enacted legislation is often hailed as a ‘game changer’ in its particular field. While many such claims are overblown, others, such as the European Union’s (EU’s) 2018 General Data Protection Regulation (GDPR) and, more recently, the EU’s Artificial Intelligence (AI) Act, undoubtedly have merit.

The latter is the world’s first comprehensive piece of legislation on AI. It aims to foster trustworthy AI in Europe and beyond by providing developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the AI Act seeks to reduce administrative and financial burdens for businesses.

The regulation is also part of a wider EU package of policy measures to support the development of trustworthy AI (which includes the AI Innovation Package and the Coordinated Plan on AI). Together, these measures are designed to guarantee the safety and fundamental rights of people and businesses when it comes to AI.

“We finally have the world’s first binding law on AI, to reduce risks, create opportunities, combat discrimination and bring transparency,” said Brando Benifei, a member of the European Parliament (MEP) and Internal Market Committee co-rapporteur, upon approval of the regulation. “Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.”

Agreed in negotiations with EU member states in December 2023, the AI Act was endorsed by MEPs on 13 March 2024, with 523 votes in favour, 46 against and 49 abstentions, and is due to enter into force on 1 August 2024.

“The EU has linked the concept of AI to the fundamental values that form the basis of our societies,” noted Dragos Tudorache, MEP and Civil Liberties Committee co-rapporteur. “The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice.”

As businesses adapt to the requirements of the AI Act amid an evolving technological landscape, a pivotal question has emerged: how will the regulation impact heavily monitored sectors that already face high compliance and safety needs, such as financial services (FS)?

Implications for FS

Although the AI Act is ‘horizontal legislation’ in that it applies to all industry sectors, it is expected to have particular resonance for FS, given that financial institutions (FIs) are more likely to use some of the classes of software that are subject to the new regulation.

“FIs are using increasingly intelligent systems to enhance the customer experience and improve efficiency,” says Emma Wright, a partner at Harbottle & Lewis. “Some of the functions carried out by FIs are considered high-risk, and additional care, governance and compliance controls will need to be built in, as the fines for non-compliance are significant.”

However, despite the ‘high risk’ functions that FIs perform, the AI Act contains no explicit references to FS use cases and only certain aspects of the regulation have specific implications for FIs.

While AI systems deemed to pose an unacceptable risk are banned outright, high-risk AI systems are permitted but subject to stringent obligations, including awareness of automation risks and fundamental rights impact assessments (the European Commission (EC) can amend the list of use cases it considers high risk).

“The issue of high-risk AI systems referred to in article 6 of the AI Act is one of the most important aspects regarding the impact of the regulation on the activities of FIs,” says Hubert Łączkowski, a junior associate at Traple Konarski Podrecki & Partners. “These are AI systems that may cause significant harm to health, safety, fundamental rights, the environment, democracy or the rule of law.”

Such high-risk systems include those used to assess the creditworthiness of natural persons or establish their credit scores, as well as AI systems intended for risk assessment and pricing of individuals in life and health insurance.

“FIs will also be obliged to verify if systems they use are included in the list of those prohibited under article 5 of the AI Act,” adds Mr Łączkowski.

The impact of the AI Act on FS will depend on the specific use case. Credit scoring models and insurance risk assessments are classed as high-risk uses of AI because they can determine an individual’s access to financial resources and restrict their ability to obtain loans, with the potential for serious financial exclusion. “Credit scoring systems which use AI will be subject to the high-risk requirements set out by the Act,” confirms Ms Wright.

In addition, other more general AI tools deployed by an FI may also carry governance obligations under the AI Act. For example, AI-powered customer service chatbots are classed as limited risk and will have obligations – although to a lesser extent than high-risk uses – such as transparency standards ensuring that users know they are communicating with AI.

GDPR overlaps

With its aim of fostering trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety and ethical principles, the AI Act is, in many ways, a GDPR for AI. However, given the similarities in principles between the two regulations, such as lawfulness, fairness, transparency, accuracy, data minimisation and accountability, the potential for overlaps is substantial.

“In principle, the AI Act was designed to avoid the risk of overlapping regulations, as indicated in recital 158 of the preamble to the AI Act, which directly refers to FIs,” notes Mr Łączkowski. “Indeed, some articles have been designed with exceptions according to which certain obligations imposed by the AI Act will be considered fulfilled if the FI meets a similar obligation, such as risk management or monitoring obligations, that must be complied with pursuant to the relevant financial service law.

“Because the AI Act does not constitute a special provision or a supplement to the GDPR, in some cases overlaps may arise,” he continues. “For example, the use of prohibited AI systems under article 5 of the AI Act may also constitute a violation of the principles regarding automated processing referred to in article 22 of the GDPR.”

FIs also need to be aware that, given the relationship between the AI Act and the GDPR, there is potential for double imposition of administrative penalties in the event of simultaneous violations of both regulations’ provisions.

Compliance and penalties

As a primary legislative source that all within its scope are required to follow, the AI Act presents complex compliance challenges for FIs deploying technologies for the purpose of providing their services, and particularly for those that rely on AI systems designated as high risk.

“FIs should firstly check whether the technological solutions they use align with the meaning of ‘AI System’ under article 3 of the Act,” says Mr Łączkowski. “It is then necessary to determine whether such systems fall within the catalogue of prohibited practices under article 5 of the AI Act or whether they constitute a high risk AI system. Based on these arrangements, FIs need to implement the relevant requirements of the regulation, supervised by competent authorities at both the EU and national level.”
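Purely as an illustration, the sequence of checks Mr Łączkowski describes can be sketched as a simple triage. The function name, category labels and boolean inputs below are assumptions made for readability; they are not the Act’s legal tests, which require case-by-case analysis under articles 3, 5 and 6.

```python
# Hypothetical sketch of the assessment sequence described above.
# The inputs stand in for legal determinations an FI would make with counsel.

def classify_system(is_ai_system: bool, is_prohibited: bool, is_high_risk: bool) -> str:
    """Rough ordering of checks an FI might apply to each deployed system."""
    if not is_ai_system:
        return "out of scope"       # not an 'AI system' within article 3
    if is_prohibited:
        return "prohibited"         # article 5: the practice must not be used
    if is_high_risk:
        return "high risk"          # article 6: stringent obligations apply
    return "limited/minimal risk"   # lighter obligations, e.g. transparency

# e.g. an AI credit-scoring model, high risk per the discussion above:
print(classify_system(True, False, True))  # high risk
```

The ordering matters: prohibition is checked before risk classification, since a banned practice cannot be remediated by high-risk controls.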

These competent authorities include the European Artificial Intelligence Office – established within the EC as the centre of AI expertise and the foundation for a single European AI governance system – which is mandated to promote an AI ecosystem and fulfil an advisory role.

According to the EC, the AI Office is “uniquely equipped” to support the EU’s approach to AI and will play a key role in implementing the AI Act by supporting governance bodies in member states in their tasks by issuing opinions and recommendations.

In terms of potential penalties, this support could prove something of a financial safety net for FIs, as non-compliance with the prohibition of the AI practices referred to in article 5 of the regulation is, depending on the severity of the infringement, subject to significant administrative fines.

“For example, prohibited practices or non-compliance with data requirements carry a penalty of up to €35m or 7 percent of total worldwide annual turnover, whichever is higher,” says Ms Wright. “In terms of non-compliance with any of the Act’s other requirements, the penalty is up to €15m or 3 percent of total worldwide annual turnover, whichever is higher. And for the supply of incorrect, incomplete or misleading information to authorities, the penalty is up to €7.5m or 1.5 percent of total worldwide annual turnover, whichever is higher.”
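As an illustrative sketch only (not legal or financial guidance), the three tiers Ms Wright quotes all follow the same ‘whichever is higher’ rule: the greater of a fixed amount and a percentage of total worldwide annual turnover. The tier names in this snippet are assumptions for readability.

```python
# Fine ceilings per the tiers quoted above: max(fixed cap, share of turnover).
PENALTY_TIERS = {
    # tier name (illustrative): (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practices":   (35_000_000, 0.07),
    "other_requirements":     (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.015),
}

def fine_ceiling(tier: str, annual_turnover_eur: float) -> float:
    """Maximum administrative fine for a tier: whichever is higher of the
    fixed cap and the turnover-based cap."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# For a firm with EUR 1bn worldwide annual turnover, 7 percent (EUR 70m)
# exceeds the EUR 35m fixed cap:
print(fine_ceiling("prohibited_practices", 1_000_000_000))  # 70000000.0
```

The turnover-based cap is what gives the regime its bite for large FIs: above roughly €500m of turnover, the percentage limb exceeds the fixed cap in the top tier.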

A global standard

Alongside the global influence of the GDPR, the AI Act is poised to become a universal benchmark in how AI is developed and used. It is a dynamic model designed to provide greater transparency and set parameters for high-risk technologies, as well as to deliver a comprehensive strategy to regulate AI applications while fostering ethical innovation.

“The dynamic development of AI definitely requires regulation, especially in the context of its use by companies from the most important sectors of the economy, such as financial markets,” opines Mr Łączkowski. “Thus, the direction toward regulating AI under the AI Act is much needed and long overdue. At the same time, the regulation will certainly need to be supplemented by delegated acts and the guidelines of supervisory authorities.”

To that end, according to the EC the application of the AI Act will be staged over two years – starting with the phasing out of the prohibited systems within six months after the Act enters into force – and will require the EC to issue various implementing guidelines.

Concurrently, the EC is requiring member states to set up regulatory sandboxes that allow the testing of high-risk AI systems in real-world conditions, to facilitate the development, training, testing and validation of innovative AI systems.

“The impact of the AI Act on financial markets will be as great as it was when the GDPR came into force,” suggests Mr Łączkowski. “The AI Act will establish an international legal framework that is consistent for the entire EU market, and with regard to matters that were previously regulated only partially, or not at all.”

As the implementation of the AI Act progresses across EU member states, FIs and the FS sector as a whole must adjust their approach to AI systems and ensure they comply with the Act’s provisions. This is a potentially game changing piece of legislation, and one that could redefine the ethos of the FS sector.

© Financier Worldwide


BY

Fraser Tennant

