Regulating AI in the life sciences sector

January 2023  |  TALKINGPOINT | SECTOR ANALYSIS

Financier Worldwide Magazine

January 2023 Issue


FW discusses regulating AI in the life sciences sector with Samantha Silver, Sarah-Jane Dobson, Charlie Whitchurch and Paula Margolis at Kennedys Law LLP.

FW: What are the key risks that artificial intelligence (AI) technologies pose to the life sciences industry?

Silver: There are many different types of artificial intelligence (AI) models, built in different ways. For many of them, one of the most prominent risks is bias in the product’s underlying model, giving rise to discriminatory outcomes. Bias typically arises where the data collected to train the model is not adequately representative of the population in which the product is intended to operate. For example, a system designed to diagnose cancerous skin lesions may prove ineffective if it has been trained on a data set that does not reflect a diverse range of skin pigmentation. Adaptive algorithms that learn autonomously also pose considerable risks, as they can cause a product to evolve and cease working as intended, potentially compromising patient safety. Sensitive, personal healthcare data is also vulnerable to third-party attackers. If AI systems are not robust against attack, products may be maliciously hijacked or clinical information may be stolen, resulting in harm to the individual and reputational damage to the operating company.
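To make the representation point concrete, the short sketch below (in Python, with entirely hypothetical group labels, predictions and thresholds) shows one simple way a manufacturer might audit a diagnostic classifier’s accuracy per skin-tone subgroup; a marked gap between groups is the kind of signal that the training data may not have been adequately representative.

```python
# Illustrative sketch only: auditing a classifier's performance per skin-tone
# subgroup to surface the kind of representation bias described above.
# The group labels, predictions and review threshold are invented for demonstration.

from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation set: far fewer darker-skin samples than lighter-skin ones.
evaluation = [
    ("lighter_skin", 1, 1), ("lighter_skin", 0, 0), ("lighter_skin", 1, 1),
    ("lighter_skin", 0, 0), ("lighter_skin", 1, 1), ("lighter_skin", 0, 0),
    ("darker_skin", 1, 0), ("darker_skin", 0, 0), ("darker_skin", 1, 0),
]

for group, accuracy in per_group_accuracy(evaluation).items():
    flag = "  <-- review: possible under-representation" if accuracy < 0.8 else ""
    print(f"{group}: accuracy {accuracy:.2f}{flag}")
```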

The legal framework, or future legal framework, applicable to AI-containing products in Europe is twofold: product regulatory and product liability.
— Paula Margolis

FW: How are regulators in the European Union (EU) and the United Kingdom (UK) viewing the potential risks posed by emerging digital technologies, including AI?

Dobson: A long-held goal of European regulators is to promote innovation and ensure that individuals and businesses can embrace digital technologies, including AI, while feeling safe and protected. Regulators have for some time grappled with how to appropriately regulate AI, given its unique quality as an intangible and changeable product, and because of the novel risks posed by the technology. As a result, the regulations potentially applicable to AI have historically been wide-ranging.

Over recent years, the European Union (EU) has introduced a raft of actual and draft legislation, including the General Data Protection Regulation (GDPR), the Omnibus Directive and the General Product Safety Regulation (GPSR), which aim to safeguard users against the risks that these technologies can present, such as cyber security vulnerabilities and data breaches. In particular, the GPSR, on which the European Parliament and the Council reached provisional agreement on 29 November 2022, addresses the product safety challenges presented by AI, such as a product’s evolving, learning or predictive functionalities. In the medical devices sector, the EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) made substantial changes to the existing regime to address AI-specific risks such as adaptive algorithms and machine learning programmes.

Notwithstanding these existing laws, the European Commission (EC) considers it necessary to introduce AI-specific rules to address the unique risks and challenges posed by AI. In April 2021, the EC published the world’s first proposal for the regulation of AI, known as the Artificial Intelligence Act (AI Act). The EC has adopted a risk-based approach, so that the ‘riskiest’ forms of AI are subject to the most stringent requirements and obligations. Certain ‘blacklisted’ AI technologies, such as those likely to cause physical or psychological harm, are prohibited outright. For the life sciences industry, this means that high-risk products, for example AI-powered medical devices, will be subject to stringent obligations, including conformity assessments, human oversight and continued maintenance.

Whitchurch: In a similar vein, the UK is also proposing to update its mainstay product safety regulatory framework and its medical device regulatory framework to reflect the risks posed by emerging technologies. However, in comparison to the EU, it has taken a more innovation-centric approach to AI regulation, with the stated goal of remaining an “AI and science superpower”. Its national AI strategy places significant emphasis on long-term investment in the AI ecosystem: notably, the UK has invested over £2.3bn in AI initiatives since 2014, with over £250m going toward the life sciences industry. Although the UK government has yet to publish draft regulation, it set out its stance in a policy paper published in July 2022, which focuses on establishing a pro-innovation approach to AI. Any future regime will be “risk-based”, focusing on high-risk concerns rather than hypothetical or low-risk uses of AI, so as to avoid creating barriers to innovation.

Most life sciences companies will undoubtedly already have implemented procedures to address the risks posed by AI, but it is important that these are continually reviewed.
— Charlie Whitchurch

FW: Could you explain the regulatory framework governing AI technologies in respect of products in the EU and UK?

Margolis: The legal framework, or future legal framework, applicable to AI-containing products in Europe is twofold: product regulatory and product liability. As with more traditional products, these complementary legislative regimes are designed to ensure the safety of products when they are first placed on the market, and to provide compensation in the event that an adverse incident occurs. From a product regulatory perspective, there is currently no standalone regulation in the EU or UK that explicitly regulates AI. For now, companies must comply with obligations contained within sector-specific regulation that bear most heavily on AI products. The medical devices regime, namely the MDR and IVDR and their predecessors, is well recognised as dealing most comprehensively with software and AI concepts. Much debate has also been generated about the applicability of the more general product safety regimes to AI.

Dobson: Once enacted, the proposed AI Act will be the primary legal framework governing AI within the EU, applicable generally to all sectors and designed to maximise harmonisation across the EU. The hierarchy of legal frameworks applicable to AI-containing products, and their overlap with existing sector-specific obligations, will therefore be complex and potentially difficult for companies to navigate in practice – although, for more traditional products, this longstanding issue has generally been well addressed by clear legislative principles on the supremacy of law. In recognition of this, industry stakeholders are calling for the AI Act to be properly aligned with sector-specific regulation. For example, MedTech Europe was quick to highlight certain duplications between the AI Act and the pre-existing obligations within the MDR and IVDR, and to seek clarification in the interests of a coherent and clear regulatory environment. As medical device and healthcare technologies become increasingly rooted in AI, one can see how the life sciences sector may be disproportionately impacted by the AI Act.

The UK government appears to have adopted a different approach from the EU in respect of product regulation applicable specifically to AI. Pending a white paper on AI regulation expected before the end of 2022, the UK has taken the softer stance of asking existing regulators to implement a set of ‘cross-sectoral principles’ when dealing with AI products, arguably reflecting the pro-innovation approach the UK is taking in an effort to show that its economy remains ‘open for business’ in a post-Brexit world. The sector-specific regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), has published a ‘Software and AI as a Medical Device Change Programme’ which aims to build on its wider proposed regulatory reform of medical devices and to provide a clear regulatory framework for software and AI that delivers a high degree of protection for patients and the public, while ensuring that the UK is recognised globally as a home for responsible innovation in medical device software. Further guidance from the MHRA is expected before the end of 2022.

FW: To what extent do AI users have a means of seeking redress if they are harmed by AI products?

Silver: In terms of liability, on 28 September 2022, the EC proposed a “targeted harmonisation of national liability rules for AI”, known as the AI Liability Directive, which would enable individuals harmed by AI to sue the AI “provider”. The AI Liability Directive was introduced in parallel with the EU’s proposed revision of the 37-year-old Product Liability Directive (PLD), the mainstay legislation governing redress for defective products, which many perceived as outdated. Both regimes will make it easier for consumers harmed by AI to bring claims for damages arising from AI failures or from non-compliance with the AI Act.

Non-compliance with AI-related product regulations often risks substantial financial penalties, which vary greatly depending on the source of obligations.
— Sarah-Jane Dobson

FW: What impact could the proposed EU Artificial Intelligence Act have on life sciences companies in and outside of the EU?

Whitchurch: The impact of the EU AI Act will be wide-ranging and will have global reach. The EU has been clear from the outset that it wants to set global standards for the regulation of AI. The AI Act will apply very broadly, covering providers and users of AI systems both inside and outside the EU. Companies across EU supply chains will need to conform to these regulations once they are in force, regardless of where they are domiciled. Some predict that this additional, complex regulatory burden could hinder innovation and stall production timelines, delaying AI technologies from reaching the market.

FW: What complexities arise with regard to who, or what, should be held accountable in the event of AI-applications resulting in damaging outcomes? What liability risks face life sciences companies that use AI?

Silver: The development of AI systems involves numerous parties working to bring to market a product which, once operational, is either partially or entirely autonomous in its reasoning and development. So when harm arises, identifying the responsible party will inevitably be complex. The EC has attempted to address this issue through the AI Liability Directive, which acknowledges that the specific characteristics of AI, including “complexity, autonomy and opacity”, also referred to as the ‘black box effect’, can make it extremely difficult for those affected to identify the liable person and succeed in any subsequent claim. The AI Liability Directive therefore proposes placing more onerous disclosure obligations on companies, enabling those affected to better understand the AI system and making it easier to identify those potentially liable. With AI being used across the life sciences sector globally, from drug discovery to clinical trials to medical technology, life sciences companies face myriad AI-related liability risks. The EC’s recently published proposals for revision of the PLD bring AI within its scope, meaning that, if enacted in their current form, affected individuals will have another legal route, in addition to the AI Liability Directive, to pursue compensatory damages arising from harmful AI systems. Notably, these proposals introduce provisions which aim to alleviate the burden of proof for claimants bringing complex claims in relation to AI products, including a rebuttable presumption of defect and causation in certain circumstances, for example where the court considers that it would be excessively difficult for a claimant to prove its case.

FW: What are the risks of non-compliance with AI-related product regulations?

Dobson: Non-compliance with AI-related product regulations often risks substantial financial penalties, which vary greatly depending on the source of the obligations. For example, the GDPR and UK GDPR set maximum fines of €20m and £17.5m respectively, or 4 percent of total worldwide annual turnover if higher. For breaches of product safety regulations, potentially unlimited financial sanctions and criminal sanctions apply. Companies found to be in breach of the AI Act, once it is in force, could face similarly significant penalties. Non-compliance with article 5 on ‘prohibited AI practices’, where blacklisted products are placed on the market, will trigger a fine of up to €30m or 6 percent of total worldwide annual turnover, whichever figure is higher. Any other form of non-compliance with the AI Act will carry a fine of up to €20m or 4 percent of total worldwide annual turnover, while the supply of incorrect, incomplete or misleading information to the regulator triggers a fine of up to €10m or 2 percent of total worldwide annual turnover. Should non-compliance persist, the AI Act grants market surveillance authorities the power to force a recall of a company’s product. In the event that non-compliance results in harm, companies also expose themselves to civil liability.
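As a purely illustrative aid (the turnover figure is hypothetical; the caps and percentages are those quoted above from the draft AI Act), the short Python sketch below shows how the ‘whichever is higher’ structure plays out in practice.

```python
# Illustrative sketch of the "whichever is higher" fine structure described above.
# Caps and percentages are taken from the text; the turnover value is hypothetical.

def max_fine(fixed_cap_eur: float, turnover_pct: float, worldwide_turnover_eur: float) -> float:
    """Return the applicable maximum fine: the fixed cap or the stated
    percentage of total worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Hypothetical company with EUR 2bn total worldwide annual turnover.
turnover = 2_000_000_000
print(max_fine(30_000_000, 0.06, turnover))  # prohibited AI practices: EUR 120m
print(max_fine(20_000_000, 0.04, turnover))  # other non-compliance: EUR 80m
print(max_fine(10_000_000, 0.02, turnover))  # misleading information: EUR 40m
```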

If we need a real-life indicator of how new technologies can be subject to group actions, we need look no further than the GDPR regime.
— Samantha Silver

FW: What tips do you have for life sciences companies to enable them to come to terms with proposed AI regulation?

Whitchurch: Preparation is key. Most life sciences companies will undoubtedly already have implemented procedures to address the risks posed by AI, but it is important that these are continually reviewed and updated to ensure that they adequately respond to the types of AI systems deployed by the business. It is also critical that appropriate governance and risk management frameworks are in place to ensure that businesses are in a position to mitigate AI risks and to foster the safe and responsible deployment of AI.

FW: What other potential legal risks do you anticipate arising in the future, in view of the ongoing development of AI technologies?

Margolis: One of the most talked-about legal risks facing all sectors, but particularly those in the business of developing AI technologies, is the growing risk of group litigation, also commonly referred to as collective or class actions. This type of mass, consumer-led litigation is fast gaining momentum, particularly in Europe. The EU’s Representative Actions Directive (RAD), which provides an EU-wide mechanism for cross-border collective actions to be brought in respect of infringements of EU laws and regulations, is due to be incorporated by member states into their national laws by December 2022, to take effect by June 2023. Although some member states are behind schedule, many already have draft legislation in place, so we could start to see a sharp uptick in group actions from mid-2023 onwards. Significantly, the RAD’s Annex I lists all EU laws and regulations in respect of which a collective action can be brought, and it is expected that the AI Liability Directive will be added to it.

Silver: If we need a real-life indicator of how new technologies can be subject to group actions, we need look no further than the GDPR regime, with a significant number of large-scale data breach claims having been brought since the introduction of the GDPR in 2018. The significant use of AI technologies within life sciences, coupled with increasing regulation and a growing appetite for collective actions, means that we can expect to see more group litigation in the future.

Samantha Silver leads Kennedys Law LLP’s products law and life sciences team. She has over 22 years of experience and became a partner in the London office in 2015. She advises on public inquiries, global product recalls and multijurisdictional product liability claims, including group litigation orders, with a focus on the pharmaceutical, medical device and consumer sectors. The claims she handles are often of a commercially sensitive nature. She can be contacted on +44 (0)20 7667 9358 or by email: samantha.silver@kennedyslaw.com.

Sarah-Jane Dobson is a partner in the London office. She is an international products lawyer. She acts on regulatory, litigious and policy matters across the full product life cycle in respect of product safety, compliance and product liability issues. Her practice is focused on multijurisdictional matters for corporate clients across a range of sectors including consumer goods, cosmetics, chemicals, food (including novel foods) and beverages, life sciences, industrial and automotive products. She can be contacted on +44 (0)20 7667 9677 or by email: sarah-jane.dobson@kennedyslaw.com.

Charlie Whitchurch is an associate (Australian qualified lawyer) in Kennedys’ London office. He represents manufacturers and insurers in relation to product liability disputes and product safety matters across a variety of industries, with a particular focus on life sciences. He has experience handling product recalls and complex, high value medical device claims across multiple jurisdictions. He can be contacted on +44 (0)20 7667 9224 or by email: charlie.whitchurch@kennedyslaw.com.

Paula Margolis qualified as a solicitor in 2011. In her previous role as a senior associate at Kennedys, she represented insurers and manufacturers in relation to product liability claims, mass torts and recalls within the medical device, pharmaceutical and consumer sectors. In 2021, she joined Kennedys’ corporate affairs team and takes responsibility for identifying and analysing business critical issues and emerging risks, including the impact of legal and political shifts, on businesses and insurers. She can be contacted on +44 (0)20 7667 9367 or by email: paula.margolis@kennedyslaw.com.

© Financier Worldwide
