Impact of the EU Artificial Intelligence Act
TALKINGPOINT | RISK MANAGEMENT
Financier Worldwide Magazine, October 2023 Issue
FW discusses the impact of the EU Artificial Intelligence Act with Tim Wright, Nathan Evans, Kate Troup, Eddie Powell and Caroline Philipps at Fladgate LLP.
FW: Could you provide an overview of the scope of the European Union’s proposed Artificial Intelligence (EU AI) Act? What were the main factors which led to the EU seeking to specifically target the regulation of artificial intelligence (AI)?
Wright: Introduced by the European Commission (EC) in April 2021, the proposed Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence, known as the EU AI Act, will govern the development and use of AI across the EU, introducing a common regulatory and legal framework. The EC cited various factors in its decision to propose the Act, including the growing importance of AI in the global economy, emerging use cases in sensitive areas such as healthcare and law enforcement, and the need to ensure that AI is developed and used in a way that is safe, ethical and accountable. High-profile cases, such as the Cambridge Analytica scandal and the rise of ‘deepfakes’, as well as Microsoft’s deployment, and swift withdrawal, of its ‘Tay’ chatbot after users manipulated it into producing racist, toxic output, highlighted the need for regulation. More guardrails were clearly needed. There are also economic motivations: the EU wants to take a leading role in setting baseline standards for AI trustworthiness and ethics, in the belief that it will benefit from a larger share of the global AI economy.
Evans: At this moment, the EU AI Act is still under negotiation, with a ‘compromise text’ approved by the European parliament forming the basis for discussions with the other EU institutions and member states. The less controversial parts of the legislation have already been provisionally agreed, with the regulation expected to be finalised by the end of the year or early 2024 at the latest. When passed, the EU AI Act will be the first comprehensive regulation of AI in the world. In addition, the EU has proposed a new Artificial Intelligence Liability Directive, as well as an amendment to the EU Product Liability Directive, which will complement the EU AI Act by addressing civil liability for AI systems, providing remedy and redress for individuals if things go wrong.
Powell: Although this is EU legislation, the AI Act will have a degree of extraterritorial effect, such as where companies outside the EU sell their AI systems to businesses in the EU or make their AI systems available to users in the EU. For example, if a company in the US develops a platform which uses AI to make decisions about applications for financial products made by EU consumers, such as a credit scoring app, which will be categorised as high-risk, the Act will apply even if that US company has no presence within the EU. As a result, producers of AI systems outside the EU will need to take account of the risk of their AI systems being used in the EU, even if those systems are not intentionally placed on the EU market or targeted at users in the EU, not least because of the significant penalties for non-compliance. They will need to appoint an EU-based authorised representative for the purposes of the Act unless they have an EU importer.
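To make the scope triggers described above concrete, the following is a minimal sketch, in Python, of how a compliance team might triage whether the Act could reach a given system. The factor names and the triage logic are illustrative assumptions drawn from this discussion, not the Act’s actual legal test, and no code of this kind appears in the legislation itself.

```python
# Illustrative sketch only: a simplified triage of whether the EU AI Act
# might apply to a provider, based on the scope factors described above.
# The fields and logic are assumptions for illustration, not legal advice.

from dataclasses import dataclass

@dataclass
class AISystemFootprint:
    provider_established_in_eu: bool  # provider based in a member state
    placed_on_eu_market: bool         # system sold or deployed in the EU
    output_used_in_eu: bool           # e.g. credit scores for EU consumers
    has_eu_importer: bool             # an importer established in the EU

def act_may_apply(fp: AISystemFootprint) -> bool:
    """True if any of the extraterritorial triggers discussed above is present."""
    return (fp.provider_established_in_eu
            or fp.placed_on_eu_market
            or fp.output_used_in_eu)

def needs_authorised_representative(fp: AISystemFootprint) -> bool:
    """Non-EU providers in scope need an EU-based authorised
    representative unless an EU importer fills that role."""
    return (act_may_apply(fp)
            and not fp.provider_established_in_eu
            and not fp.has_eu_importer)

# Example: the US credit-scoring platform described in the text.
us_platform = AISystemFootprint(
    provider_established_in_eu=False,
    placed_on_eu_market=False,
    output_used_in_eu=True,
    has_eu_importer=False,
)
assert act_may_apply(us_platform)
assert needs_authorised_representative(us_platform)
```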
FW: Drilling down, what applications, systems and processes are targeted by the EU AI Act? To what extent does the EU AI Act impose onerous compliance obligations on companies that use AI systems, particularly those classified as high-risk? And how is generative AI (GenAI) treated in the Act?
Wright: The Act seeks to adopt a future-proofed, technology-neutral definition of AI, aligned with the definitions used by the Organisation for Economic Co-operation and Development (OECD) and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework, and covering every sector except AI developed solely for military use. The EU AI Act targets producers and their importers and distributors, and, to a lesser degree, deployers of AI, adopting a risk-based approach under which systems are grouped into four risk categories: unacceptable, high, limited, and minimal or no risk. Unacceptable AI applications, such as social scoring systems, will be prohibited, while producers of high-risk systems will face the most significant compliance requirements. High-risk systems include those used as safety components of regulated products, in critical infrastructure, and in sensitive decision-making areas such as employment, education and access to essential services. These systems will need to meet requirements around data and data governance, documentation and transparency, human oversight, and robustness and accuracy. In addition, producers of high-risk AI systems will have to meet registration, conformity assessment, safety and quality assurance requirements before placing their systems on the market, and must undertake regular monitoring and reviews, report serious incidents or malfunctions, conduct testing and updates, and keep records and documentation.
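As a rough aid to the tiered structure just described, here is a short illustrative sketch mapping each risk tier to the headline obligations mentioned above. The tier names track the Act’s categories as discussed here; the obligation lists are a compressed, assumed summary for illustration, not the statutory text.

```python
# Illustrative sketch only: the four risk tiers described above, with a
# simplified, assumed mapping from tier to headline obligations.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited, e.g. social scoring
    HIGH = "high"                  # heaviest compliance burden
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # little or no added obligation

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "data and data governance",
        "documentation and transparency",
        "human oversight",
        "robustness and accuracy",
        "registration and conformity assessment before market placement",
        "post-market monitoring and serious-incident reporting",
    ],
    RiskTier.LIMITED: ["notify users that they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the assumed headline obligations for a given tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```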
Troup: The compromise text of the EU AI Act includes several provisions that specifically address generative AI (GenAI). These provisions were introduced at a relatively late stage, in the wake of OpenAI’s release of ChatGPT, which was quickly followed by a slew of other GenAI systems such as Google’s Bard and Anthropic’s Claude. The Act defines GenAI systems as machine learning models that can generate content, such as text, video or images, in a human-like manner. This raises the possibility that GenAI systems could be used to create harmful or dangerous content, such as deepfakes or hate speech. So, as well as complying with the rules for high-risk systems where they apply, producers of GenAI will have to notify users where content is artificially generated, and document the compliance, transparency and accuracy of their models. Where GenAI is used by financial services firms, for example in chat interfaces capable of providing financial advice or in enhanced risk modelling and forecasting systems, those firms will also need to consider relevant financial services regulations and guidance from applicable regulators.
FW: How has the EU AI Act generally been received? In what areas has it garnered support and criticism?
Evans: The EU AI Act has been generally well received by stakeholders, given the unquestionable potential for AI to be used by bad actors for seriously harmful purposes, although the Act does have its critics. Naturally, the EU’s institutions have been among its biggest supporters, arguing that specific regulation is needed to protect the rights and safety of citizens, with the parliament calling the Act “a landmark piece of legislation” that will “set global standards for the development and use of artificial intelligence” and the EC lauding it as necessary to “ensure that AI is developed and used in a way that is safe, ethical, and beneficial to society”. However, commentators have criticised the Act for being overly complex, prescriptive and burdensome, noting that it risks stifling innovation, with EU policymakers driven more by public anxiety around AI than by evidence-based risk analysis. Others have suggested that it is overly ambitious given the fragmented governance structures and approaches to AI that currently exist across the 27 member states, and that companies developing and implementing such systems would face disproportionate costs and liability risks. Significant obstacles stand in the way, and it remains to be seen whether the Act can strike the desired balance: enabling the safe use of AI without stifling innovation or driving investment elsewhere. Another issue is a perceived lack of global coordination since, naturally, the AI Act is focused on the deployment and use of AI within the EU. Without broader global cooperation, even well-intentioned guidelines risk being circumvented.
Philipps: In the field of recruitment, there had been some expectation that the use of AI would help avoid the unconscious bias of human decision makers. However, current AI use has led to a range of concerns, including discrimination and bias, a lack of privacy and a lack of accountability. For example, AI recruitment systems have been found to inadvertently discriminate against certain groups of people, such as women, people with disabilities and ethnic minorities. In the workplace more broadly, and in particular in the relationship between employers and their personnel, discrimination and bias may be built into performance management and other human resources (HR) systems, and critics have also voiced concerns that AI might lead to job displacement and deskilling, hollowing out HR functions and, especially, soft skills and capabilities. However, the AI Act will not address these issues.
FW: What improvements, if any, do you feel could be made to the legislation? Are there any loopholes or exceptions in the law that you believe should be tightened such as, for example, where AI is used in recruitment or in the workplace?
Wright: One area which might get more attention from policymakers is innovation. The European parliament’s compromise text mandates that each member state must have a regulatory sandbox: a controlled environment where companies can experiment under the supervision of a public authority. Some member states have suggested that sandboxes should be able to be established jointly with other member states, or that the obligation should be capable of being fulfilled by joining a sandbox at the EU level. To incentivise participation, AI developers could benefit from a presumption of conformity with the requirements for high-risk systems, with sandbox exit reports included in the declaration of conformity. Notified bodies could also be brought into sandboxes to streamline the conformity assessment process, alongside stricter safeguards for testing carried out in real-world conditions.
Philipps: The UK’s approach to regulating AI, by contrast, appears at first blush to be far more permissive and pro-innovation, aiming to improve public trust in AI and to develop the UK’s AI capabilities. However, opportunity rarely comes without its challenges. The increased use of AI in recruitment processes and employee surveillance is raising anxiety among employees in many industries, where a lack of transparency about how decisions are made is arguably contributing to a distrustful and demotivated workforce. An algorithm that sets performance targets may not take account of the fact that a particular employee’s disability makes it much harder for them to hit those targets, with the result that the employee is unfairly and unlawfully penalised. Such complex and nuanced issues may be difficult to deal with under the EU AI Act, so it will be important to ensure that its provisions are balanced against workers’ existing rights. For example, in the UK, the Equality Act 2010 prohibits discrimination in relation to a range of protected characteristics, including disability, and the Human Rights Act 1998 protects an individual’s right to respect for private and family life.
FW: Is the EU AI Act likely to have an impact internationally, influencing the approach of regulations elsewhere? Could it become a global standard similar to the EU’s General Data Protection Regulation (GDPR)?
Powell: The EU is a major player in the global economy, and its regulations often serve as a model for other countries. The so-called ‘Brussels effect’ is likely to be a significant factor in the global adoption of the EU AI Act. We have already seen it in fields such as environmental protection and food safety, in Apple’s worldwide adoption of USB-C charging ports, and most notably in data protection and privacy, where the extraterritorial reach of the General Data Protection Regulation (GDPR) influenced data protection regulations worldwide, creating a global tightening of consumer data protection. The big difference between the AI Act and the GDPR is that the GDPR drove other countries to improve their privacy laws by providing for mutual recognition to ease international data flows with recognised countries; this is much less a feature of the EU AI Act. It is also worth remembering that, in addition to the obligations in the AI Act, a raft of other laws and regulations, covering matters such as copyright, equal rights and non-discrimination, as well as data protection and privacy, already apply to the development and use of AI.
Evans: The EU AI Act could have an impact internationally, serving as a template for other regulators, influencing the development of AI and its use cases, promoting global policy discussion, and establishing a framework for the protection of individuals. However, there are limits, with other jurisdictions, including the UK and the US, each looking to plough their own furrow as they seek to balance a desire for innovation with policies and principles for safe and trustworthy AI. Examples include the US’s Blueprint for an AI Bill of Rights and the UK’s pro-innovation AI white paper, while China has drafted rules requiring chatbot makers to comply with Chinese state censorship laws. There is also a proposal for a US-EU ‘AI Code of Conduct’, intended as a first step toward transatlantic foundations for AI governance.
Troup: Financial institutions (FIs) that use, develop or procure AI systems should evaluate the potential applicability of the EU AI Act, regardless of where they are located or established, due to the extraterritorial effect. The AI Act can apply to providers that place AI systems on the market or put them into service within the EU, as well as providers and users of AI systems that are physically present or established in a third country, where the output produced by the system is used in the EU. Therefore, the scope of the AI Act extends beyond the EU, and FIs established outside the EU will still need to take appropriate precautions to comply with the forthcoming legislation. This means that they may need to consider adopting a common set of AI systems that comply with the EU AI Act or using different AI systems in different jurisdictions.
FW: With an expected grace period of 24 months from the date that the EU AI Act comes into force, what advice would you offer to companies on preparing for compliance with the new rules?
Wright: The EU AI Act is a significant and complex piece of legislation, and companies should start now by gaining a good understanding of its requirements, which will vary depending on a company’s role in the AI value chain and the risk profile of the relevant AI system. Key steps include: educating the board and gaining board-level and stakeholder buy-in; establishing a cross-functional team, programme management and expert resources; performing a comprehensive assessment of the scope and risk profile of current and planned AI systems; implementing training; preparing and implementing policies and procedures; developing and implementing a compliance plan; designating responsible individuals and establishing governance, change management, reporting and oversight; maintaining comprehensive records and documentation; and performing ongoing monitoring.
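To illustrate the assessment and record-keeping steps just listed, here is a minimal sketch of the kind of AI system register a compliance team might maintain during the grace period. The field names, roles and statuses are assumptions for illustration; the Act does not prescribe this structure.

```python
# Illustrative sketch only: a minimal AI system register supporting the
# assessment, governance and record-keeping steps described above.
# All field names and statuses are assumptions for illustration.

from dataclasses import dataclass, field
from enum import Enum

class ValueChainRole(Enum):
    PROVIDER = "provider"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    DEPLOYER = "deployer"

@dataclass
class RegisterEntry:
    system_name: str
    role: ValueChainRole                 # the company's role in the value chain
    risk_tier: str                       # e.g. "high", "limited"
    owner: str                           # designated responsible individual
    conformity_assessed: bool = False
    documentation_complete: bool = False
    open_actions: list[str] = field(default_factory=list)

@dataclass
class AISystemRegister:
    entries: list[RegisterEntry] = field(default_factory=list)

    def outstanding(self) -> list[RegisterEntry]:
        """Entries still needing work before the grace period ends."""
        return [e for e in self.entries
                if not (e.conformity_assessed and e.documentation_complete)]

register = AISystemRegister()
register.entries.append(RegisterEntry(
    system_name="credit-scoring-model",
    role=ValueChainRole.PROVIDER,
    risk_tier="high",
    owner="Head of Model Risk",
    open_actions=["complete conformity assessment", "finalise technical file"],
))
for entry in register.outstanding():
    print(entry.system_name, entry.open_actions)
```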
Powell: Companies will be able to leverage aspects of their GDPR compliance programmes, such as data governance frameworks, to help meet requirements under the EU AI Act. Policies and processes for managing personal data established under the GDPR can provide a foundation for responsible data use and oversight when developing or deploying AI systems, and their built-in checks and controls help minimise the risks that the Act aims to address. Companies can also adapt existing approaches to privacy impact assessments: methodologies for assessing data privacy risks can be repurposed to evaluate AI risks, considering factors such as use cases, training data and impact on rights. The same goes for information audits. Regular reviews of data and systems under the GDPR give visibility into how information moves through an organisation, highlighting risks and making the ‘unknown unknowns’ known, and the same approach can uncover issues in AI development or integration early on. Finally, maintaining records of data processing activities under the GDPR, including purposes and security controls, provides documentation that can help demonstrate AI governance, oversight and risk mitigation, with an emphasis on accountability and record-keeping.
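As a sketch of the adaptation just described, the following extends a GDPR-style impact assessment record with the AI-specific factors mentioned above (use case, training data, impact on rights). All field names here are illustrative assumptions, not terms defined by the GDPR or the AI Act.

```python
# Illustrative sketch only: extending a GDPR-style impact assessment
# record with AI-specific factors, as described above. All field names
# are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class PrivacyImpactAssessment:           # the existing GDPR-era artefact
    processing_purpose: str
    data_categories: list[str]
    security_controls: list[str]

@dataclass
class AIImpactAssessment(PrivacyImpactAssessment):
    # AI-specific additions layered onto the existing methodology.
    use_case: str = ""
    training_data_sources: list[str] = field(default_factory=list)
    rights_affected: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)

aia = AIImpactAssessment(
    processing_purpose="automated credit decisions",
    data_categories=["financial history", "identity data"],
    security_controls=["encryption at rest", "access logging"],
    use_case="credit scoring for EU consumers",
    training_data_sources=["historical loan book"],
    rights_affected=["non-discrimination", "access to essential services"],
    human_oversight_measures=["manual review of declined applications"],
)
print(aia.use_case, aia.rights_affected)
```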
FW: Going forward, how do you expect the EU AI Act will affect developers and operators of AI systems over the short, medium and long term? What predictions would you make about its potential ability to shape the direction of AI in the EU?
Troup: The text of the EU AI Act is still under negotiation, so it is possible that the obligations of producers and users of AI systems will change by the time the Act comes into effect, although, based on the adopted negotiating positions and the progress already made, wholesale changes are not expected. Judging from the progression of the compromise text, the Act will clearly affect developers and operators of AI systems. For example, producers of AI deployed in the financial services sector will need to ensure their systems meet the new framework, including by conducting risk assessments and ensuring transparency and explainability. They will also need to ensure that their AI systems enable financial services firms to comply with the additional rules and regulations likely to be imposed by financial services regulators as a result of the EU AI Act.
Wright: In the short term, potential impacts include increased compliance costs, as the AI Act will impose a raft of new compliance obligations, as well as delays in the development and deployment of AI systems as developers and operators take additional time to ensure their systems comply, potentially slowing the pace of innovation in the AI sector. Inevitably, there will also be increased scrutiny from regulators. In the medium and longer term, however, we expect to see benefits such as a shift in the way that AI systems are developed, tested, documented and deployed, with an increased focus on safe, trustworthy and responsible AI, as well as increased investment in AI safety research. The Act will also lead to the development by the European standardisation organisations (CEN, CENELEC and ETSI) of harmonised standards for safe and trustworthy AI, which could stimulate the emergence of a new global ethical framework for AI.
Tim Wright is a technology, sourcing and commercial lawyer at Fladgate where he heads up the firm’s technology sector group. With over 30 years’ experience, his practice is focused on advising clients on their outsourcing, cloud, digital, technology and other commercial projects. He also works on a range of advisory and compliance projects, covering supply chain risk (including modern slavery, conflict minerals and anti-bribery), crisis and continuity management, GDPR and due diligence. He can be contacted on +44 (0)7903 349 701 or by email: twright@fladgate.com.
Nathan Evans is an IT and outsourcing lawyer with over 10 years’ experience advising clients on strategic and complex projects, including systems, platform and application build projects, networking, infrastructure and cloud services arrangements, terms of sale and purchase (covering hardware, components and peripherals), integrator services agreements (such as for Dynamics 365), managed services and other digital transformation projects. He also advises on all aspects of software development and licensing, including agile delivery, open source and blockchain-enabled platforms. He can be contacted on +44 (0)7908 639 801 or by email: nevans@fladgate.com.
Kate Troup is a financial services regulatory lawyer with expertise in the investment management, private banking, FinTech and cryptoasset sectors. She advises both UK-based firms and overseas firms wishing to provide services to the UK market. Her advice covers the full lifecycle of a regulated firm, often starting with advising firms about whether they require FCA or PRA authorisation and then advising them on their conduct of business requirements, internal obligations and obligations owed to their regulator. She can be contacted on +44 (0)7507 480 999 or by email: ktroup@fladgate.com.
Eddie Powell has over 30 years’ experience, initially starting out as a disputes lawyer before specialising in commercial legal issues, such as IP, commercial contracts, GDPR compliance, e-commerce terms, consumer terms and technology contracts. He enjoys helping clients to achieve their business plans, delivering his advice with a healthy dose of pragmatism and ‘gut feel’. He can be contacted on +44 (0)7852 040 590 or by email: epowell@fladgate.com.
Caroline Philipps specialises in employment law, advising on both contentious and non-contentious issues, ranging from contractual negotiations to redundancies, dismissals and negotiated exits. She also acts for clients during employment tribunal proceedings, particularly in relation to discrimination claims. Clients often include in-house counsel and HR teams in larger organisations, as well as founders and partners in entrepreneurial start-ups. She can be contacted on +44 (0)7980 893 884 or by email: cphilipps@fladgate.com.
© Financier Worldwide