August 2022 Issue
The European Union (EU) has long pondered the question of artificial intelligence (AI). In April 2021, it presented its ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, otherwise known as the Artificial Intelligence Act (AI Act).
The introduction of the AI Act is momentous for several reasons and indicates the direction of travel with respect to AI. “The AI Act is a highly anticipated piece of legislation,” notes Eoghan Doyle, a partner at Philip Lee LLP. “With the advent of Web 3.0 and the new forms of technologies that come with it, many actors in the AI sector have been following the work of the European Commission (EC) to understand and prepare for what is already seen as future landmark legislation worldwide. As with any new compliance regime, it is fair to say that businesses are concerned as to the level of resources they will need to devote to the area, the costs involved and how it might impact their business model.”
Building trust, implementing safeguards
Momentum for a legal framework to govern AI has been growing globally in recent years. “While there are numerous examples of ethical codes and guidelines that have been produced, such as the Organisation for Economic Co-operation and Development (OECD) AI principles and UNESCO’s recommendation on AI ethics, lawmakers are now grasping the nettle in their attempts to regulate this highly complex and powerful technology,” says Victoria Hordern, a partner at Taylor Wessing. “And it is not just in the EU where we are seeing regulatory movement. The US Congress is currently considering the Algorithmic Accountability Act and the Federal Trade Commission (FTC) has filed for rulemaking authority in privacy and AI.”
In the EU, according to Vincent Wellens, a partner at NautaDutilh Avocats Luxembourg, a progressive approach to potential regulation has made it possible to listen to ethics specialists and innovative companies, to understand the technology, study the reality of the risks and, above all, avoid legislation that would quickly become obsolete. “Although many politicians were afraid of the possibility of humans being replaced by machines, years of dialogue have reassured them on the state of the science and the major differences between tools based on AI,” he says. “It is also now accepted that regulation is needed to ensure legal certainty and thus facilitate investment and innovation in the field of AI in the EU.”
Among its provisions, the AI Act contains powers for oversight bodies to order the withdrawal of a commercial AI system or require that an AI model be retrained if it is deemed to be high risk. The basic tenets of the AI Act will see AI applications assigned to three risk categories. First, applications and systems that create an unacceptable risk will be banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Finally, applications not explicitly banned or listed as high-risk are largely left unregulated.
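For illustration only, this tiered structure can be sketched as a simple classification routine. The sketch below is a hypothetical paraphrase of the Act’s broad logic, not its legal test: real classification depends on the use cases enumerated in the Act’s annexes, and the function, labels and example use cases here are all assumptions made for the example.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # permitted subject to strict legal requirements
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical, simplified lists for illustration; the Act defines these
# categories by enumerated use cases in its annexes, not by keywords.
BANNED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring", "law_enforcement"}

def classify(use_case: str) -> RiskTier:
    """Map an AI use case to the Act's broad risk tiers (sketch only)."""
    if use_case in BANNED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("cv_screening"))  # RiskTier.HIGH -> specific legal requirements apply
```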
“The AI Act is intended to be robust but also provides for flexible mechanisms that allow it to be dynamically adapted to technological developments and new situations of concern,” explains Mr Wellens. “This uniform framework should ensure that AI systems used in the EU market are safe and comply with safety requirements and existing legislation on fundamental rights. It should thus facilitate the development of a single market for trustworthy AI applications and prevent market fragmentation, particularly against the power of the US and China.”
The stated goal of the AI Act is to address risks associated with AI use by developing an ecosystem of trust. “This is an ambitious aim given how rapidly technology can change, and since it assumes a certain amount of good faith among those developing AI systems,” suggests Ms Hordern. “For any new emerging technology, not far behind the buzz of the new will be, unfortunately, the old concerns of bad actors and those seeking to manipulate or cause harm. The AI Act’s ambition is like that of the General Data Protection Regulation (GDPR), as it seeks to regulate a vast layer of interactions which are only going to increase in volume and complexity as time goes on.”
For Mr Doyle, the main aim of the AI Act is to establish Europe as the central hub of trustworthy and horizontally regulated AI in the global market, boosting competition and fostering AI’s potential for excellence. “A strong emphasis is also placed by the European Commission on guaranteeing fundamental rights of both natural and legal persons, security and protecting general interests,” he says. “The AI Act is an attempt to regulate AI by addressing issues such as data-driven or algorithmic social scoring, remote biometric identification, and the use of AI systems in law enforcement, education and employment.”
The EU has been first out of the gate and will likely be a catalyst for the introduction of additional measures in other jurisdictions. “We can expect that it will take a long time before a final draft of the AI Act is adopted and comes into force; in the interim, we can expect that the text will change considerably,” says Charles Morgan, a partner at McCarthy Tétrault LLP. “If and when adopted, the AI Act will almost undoubtedly have extraterritorial impact, both as a source of ‘inspiration’ for other regulators around the world and as a result of the fact that creators of AI systems will want to ensure that their products and services have a global reach, and hence many will likely be motivated to adopt a ‘highest standard’ approach to compliance, with EU regulation serving once again as the ‘highest standard’.”
Criticisms
Not surprisingly, the AI Act in its current form has its critics. Tech companies and industry bodies have called for greater clarity around the definition of AI, the classification of those ‘high-risk’ AI applications subject to stricter regulation or even outright bans, and detail on some of the proposed AI ‘harms’. “A significant part of the criticism directed at the AI Act relates to administrative and compliance costs that organisations anticipate they will be faced with, particularly in respect of high-risk AI systems,” notes Mr Doyle. “Commentators have said this could act as a deterrent against investment in the AI field and work against the EU’s objective of boosting innovation in a sector composed mainly of SMEs and start-ups.”
For ‘high-risk’ applications, the current framework contains extensive obligations, but in rather vague terms. “This has the double inconvenience of making compliance costly for good-faith actors, and enforcement difficult against bad-faith actors,” argues Mr Wellens. “The grandfathering clause is also drafted in a way that could hinder the enforceability of the AI Act for several years. Although many stakeholders approved of the risk-based approach, criticism centred on the core concepts of the regulation, finding that they were conceptually too imprecise, too narrow, too broad or overlapping with other regulations.”
According to Mr Morgan, the AI Act leverages the logic of, and incorporates by reference in its annexes many elements of, the European product safety and liability regime. “One of the problematic consequences of this approach is that it focuses on very specific use cases for AI rather than outcomes-based harms,” he says. “The result is a regulatory regime that will either require continuous updating or leave a regulatory gap of harms without remedy.
“Secondly, the transparency and explainability obligations under the AI Act are directed toward the enterprise user of the technology and to the regulator with supervisory authority, rather than to the individual EU subjects whose rights may be affected by the AI systems,” he continues. “The result is that such individual subjects are unlikely to understand how algorithmic decisions that affect them have been made.”
Inevitably, the AI Act has also drawn comparisons with the GDPR, particularly given its extraterritorial reach, applying the ‘place of market’ principle to determine territorial scope. “Essentially any AI system available in the EU will need to comply with the AI Act,” affirms Ms Hordern. “So, a non-EU provider placing an AI system on the EU market must comply. Providers and users of AI systems outside the EU are also covered by the AI Act if the result produced by the AI system is used in the EU. In that sense, comparisons with the GDPR’s extraterritorial reach hold water since the touchpoint is whether there is an impact on activities and individuals in the EU.”
Other criticisms include the fact that the Act does not replicate the ‘one-stop-shop’ mechanism under the GDPR, which may lead to concerns about consistency and cooperation between supervisory authorities across all member states. This, posits Mr Doyle, could potentially give rise to a ‘fragmented’ application of the AI Act across the EU. “The AI Act does not provide for a complaint system or direct enforcement rights for individuals,” he says. “This would appear to be a major gap in the legislation, and many would view this as incompatible with an instrument whose function is to safeguard fundamental rights. Compare that to the GDPR, where the ability of data subjects to sue and make complaints plays a key role in holding organisations to account.”
As Mr Wellens points out, whatever the outcome of the legislative process, the risk-based stratified approach may generate a decoupling of the market. “High-risk and non-high-risk AI systems may be considered essentially different assets,” he says. “Because AI regulation is often less ‘public facing’ than the GDPR, for example, we would expect it to grow and change less through litigation, and more through the development of codes of conduct and sectoral certification schemes. Soft law back-and-forth between regulatory authorities and business associations should be a crucial vector.”
Preparing for compliance
Though final agreement on the AI Act is not expected until 2023, companies can take a number of important steps to prepare for its introduction. The AI Act places technical, monitoring and compliance obligations on companies providing or using AI systems. These obligations are more or less extensive depending on the risks arising from those AI systems. Since the AI Act sets out three risk categories, a reasonable approach to compliance is for organisations to establish a robust risk management lifecycle. “When doing so, organisations should identify the AI systems they rely on and the risks such systems represent,” points out Mr Doyle. “The measures in place to mitigate such risks should also be considered in light of applicable regulations and standards, and organisations should run regular conformity assessments against the same.”
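As a minimal sketch of what such a risk-management lifecycle might look like in practice, the example below maintains an inventory of AI systems and flags conformity gaps. Everything here is hypothetical: the AI Act does not prescribe any particular tooling, and the class, field names and example systems are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISystem:
    name: str
    risk_tier: str                          # "unacceptable" | "high" | "minimal"
    mitigations: List[str] = field(default_factory=list)
    last_assessment: Optional[str] = None   # ISO date of last conformity assessment

def conformity_gaps(inventory: List[AISystem]) -> List[str]:
    """Flag systems that would need attention before the Act applies (sketch only)."""
    gaps = []
    for system in inventory:
        if system.risk_tier == "unacceptable":
            gaps.append(f"{system.name}: prohibited use case, withdraw or redesign")
        elif system.risk_tier == "high" and system.last_assessment is None:
            gaps.append(f"{system.name}: high-risk system with no conformity assessment on record")
    return gaps

# Hypothetical inventory for illustration.
inventory = [
    AISystem("cv-ranker", "high"),
    AISystem("support-chatbot", "minimal"),
]
for gap in conformity_gaps(inventory):
    print(gap)
```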
The deadline for application of the regulation has been extended from two to three years after its entry into force. During this time, the most important issue for AI providers will be determining whether their systems fall into the ‘high risk’ category, as they will need to meet the most stringent set of requirements and undergo conformity assessment procedures before they can be placed on the EU market. “For some AI systems with limited risk, specific transparency obligations are imposed – for example where there is a clear risk of manipulation, such as chatbots,” says Mr Wellens. “Additional measures include the establishment of AI regulatory sandboxes to help reduce the regulatory burden.”
While attitudes toward AI are shifting, it remains a difficult space to regulate. “Trying to understand AI makes most people’s heads spin,” notes Ms Hordern. “And yet the expanding use of AI increasingly means it has an everyday impact on people’s lives. Governments have an obligation to ensure responsible use of AI and to ensure that people are aware of how the technology affects them.”
The AI Act is an important first step in this process, with many more expected to follow.
© Financier Worldwide
BY Richard Summerfield