Competition issues arising from AI in the life sciences sector

November 2024 | SPOTLIGHT | RISK MANAGEMENT

Financier Worldwide Magazine, November 2024 Issue


The use of artificial intelligence (AI) in the life sciences sector is growing rapidly across an increasing number of contexts. For example, it is increasingly used in research as a discovery tool, to identify candidate molecules for further investigation. AI enables researchers to significantly accelerate the process of winnowing down the pool of potential candidates to those most likely to be developed into innovative drugs.

AI is also being used in diagnostic contexts, for example in the analysis of medical images, to assist doctors and other healthcare providers in diagnosing patients. AI systems that have been trained (or fine-tuned) on thousands of medical images have proven useful in helping doctors accurately interpret MRI scans and make recommendations about the best treatment for patients.

Increasingly, AI is being deployed in therapeutic applications, for example to predict risks and customise treatment plans, enhancing patient outcomes. It has been used in various aspects of diabetes management, such as advanced diagnostics, predictive modelling and personalised patient care.

Finally, AI is also being used in regulatory contexts, for example in the triage and categorisation of adverse event reports, including those from patients and healthcare professionals. Notably, AI tools can be used to analyse conversation data to highlight issues in processes and communication.
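To make the discovery use case described above concrete, the sketch below shows, in highly simplified form, how a trained model's scores might be used to winnow a pool of candidate molecules down to a shortlist. It is a hypothetical illustration only: the scoring function, molecule names and threshold are invented for the example and do not reflect any particular discovery platform.

```python
# Hypothetical sketch: winnowing candidate molecules by model score.
# score_candidate stands in for a trained AI model; in practice it
# would be a learned predictor of, e.g., binding affinity or toxicity.

def score_candidate(molecule: str) -> float:
    """Placeholder for an AI model's predicted 'promise' score (0 to 1)."""
    return {"mol-A": 0.91, "mol-B": 0.34, "mol-C": 0.78}.get(molecule, 0.0)

candidates = ["mol-A", "mol-B", "mol-C"]

# Keep only candidates whose predicted score clears a chosen threshold,
# reducing a large pool of possibilities to a short list for lab work.
shortlist = sorted(
    (m for m in candidates if score_candidate(m) >= 0.75),
    key=score_candidate,
    reverse=True,
)
print(shortlist)  # ['mol-A', 'mol-C']
```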

Access to data sets by competing AI model developers

Competition laws apply to the development and deployment of AI systems in the European Union (EU). A number of European competition authorities have already begun to consider whether there are potential ‘bottlenecks’ in the development and deployment of AI models, including whether access to the data required to train and fine-tune AI models could limit the ability of new entrants to compete with established players.

While there is, as yet, no specific guidance regarding the circumstances in which such bottlenecks might arise in the context of AI models, the case law of the EU courts (such as Bronner (C-7/97), Microsoft (T-201/04) and Huawei (C-170/13)) provides a potential framework for such analysis. Importantly, those cases suggest that such data sets would need to be essential to the development of the relevant AI models and systems before competition concerns would arise.

Specifically, it would need to be shown that: (i) the data is indispensable to creating an AI model or system; (ii) in the absence of such data, there could be no effective competition on the downstream market; (iii) the refusal to provide access to the data would prevent the development (or fine-tuning) of the AI model or system; and (iv) there is no objective justification for the refusal. This framework clearly sets a high bar for identifying circumstances in which access to data represents such a bottleneck.

Perhaps as a result of this high bar, competition authorities have framed potential concerns as relating to the possibility that freely available data may become fully exploited (or be augmented only slowly), such that AI model developers that do not produce proprietary data themselves have to purchase it, thereby increasing their costs.

This concern is curious, since it appears to ignore the fact that all AI model developers are currently using public non-proprietary data to train and fine-tune their models and, in the vast majority of cases, also licensing in (on a non-exclusive basis) third-party proprietary data for training and fine-tuning, alongside their own proprietary data. In addition, AI model developers are increasingly using synthetic data to augment those public, proprietary and in-licensed data sets.

Furthermore, this concern does not appear to take into account the costs of developing the proprietary data that is used in training and fine-tuning models. As a result, it is far from clear that the incremental costs of acquiring data are necessarily materially higher for AI model developers that do not develop proprietary data than for those that do.

That said, entities fine-tuning multiple AI systems for use in a combination of diagnostic, therapeutic and regulatory contexts may be in a position to create and use data sets that are specific to individual patients, cannot be replicated by third parties and can be used across various deployments. It is also unlikely that synthetic data fungible with such proprietary data could be developed by a third party.

As a result, if there is a ‘first mover’ that has acquired a ‘dominant’ position supplying solutions into which AI is deployed, competing AI model and system developers may be unable to develop comparable AI systems without access to those data sets, which could make access to such proprietary data sets a concern for competition authorities.

Competition authorities have also noted the potential for ‘feedback effects’ (e.g., where output data from a deployed model is used to continue to train and fine-tune the model and system) to be exclusionary, if such data are provided to the model developer exclusively and enable it to improve its model through ongoing training and fine-tuning. However, care is required before simply assuming that model developers have incentives to vertically integrate or to enter into such exclusive feedback arrangements.

In most instances, an AI model developer’s incentive is to have its models deployed into a broad cross-section of systems and user-facing services (given the costs of development), such that its decisions about where its model is deployed are not driven by whether it can access data produced by the deployed model (for further training and fine-tuning).

It is also possible that access to very specific data sets could raise concerns in the context of candidate identification. If, for example, a development team works with a candidate discovery AI system and chooses to only pursue research for a small proportion of potential candidates identified, while contractually prohibiting the AI system operator from working with any third party to further investigate the other (unused) candidates, such a restriction could be considered to be anti-competitive.

Access to AI models and systems and interoperability

If one or two AI models and systems were to become so widely used by pharmaceutical companies that other AI systems were unable to offer comparable results, particularly in the context of research, denying access to those models and systems may raise competition concerns.

That said, there is evidence that the price of accessing generative AI (GenAI) models through application programming interfaces (APIs) has dropped significantly, for example from $180 to $0.75 for 2 million tokens (a 240-fold decrease). There has also been an increase in the number of capable models, such that developers seeking to deploy such AI have many options. It may be that the cost of innovation is coming down (as has been the case with many other technology waves).
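The 240-fold figure follows directly from the prices cited above, as the back-of-the-envelope check below shows; the calculation is illustrative only.

```python
# Back-of-the-envelope check of the quoted API price decline.
old_price = 180.00  # dollars for 2 million tokens (as cited above)
new_price = 0.75    # dollars for the same 2 million tokens

print(old_price / new_price)  # 240.0, i.e. a 240-fold decrease
```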

Given this, and the thousands of GenAI models being trained, there is limited evidence that a level of concentration that could raise exclusionary concerns is likely to develop. What is less clear is whether AI systems fine-tuned for particular use cases (including some of those identified above in the life sciences sector) could, over time, become dominant, such that their developers might be required to provide access to those models and systems.

In the context of the debate about access to AI models, competition authorities are also starting to consider the appropriate ‘forms’ of access. Unlike conventional operating system-based ‘stacks’, where access and interoperability at different layers make up a menu of ‘access’ options, there is a broader set of potential access routes for AI models and systems (both open source and proprietary).

APIs provide one route to deploying AI capabilities, but there are other ways to integrate with AI systems, and certain use cases can simply query existing user-facing AI services for responses. The most appropriate form of access or integration can vary significantly by use case.
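As a concrete (and purely illustrative) example of API-based access, the sketch below queries a hosted generative model over HTTP. The endpoint URL, model name and API key are hypothetical placeholders, and the request shape assumes an OpenAI-style chat completions interface rather than any specific vendor's contract.

```python
# Hypothetical sketch of API-based access to a hosted GenAI model.
# The endpoint, model name and key are placeholders, not a real service.
import requests

API_URL = "https://api.example-ai-vendor.com/v1/chat/completions"  # hypothetical
API_KEY = "YOUR_API_KEY"  # hypothetical

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "example-model",  # hypothetical model identifier
        "messages": [
            {"role": "user", "content": "Summarise this adverse event report: ..."}
        ],
    },
    timeout=30,
)

# Print the model's reply (assuming an OpenAI-style response schema).
print(response.json()["choices"][0]["message"]["content"])
```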

Exchange of competitively sensitive information and collusion

It will be important to ensure that AI systems do not become conduits for the exchange of competitively sensitive information between competitors. That risk is highest where multiple competitors use the same AI system (one that uses and produces competitively sensitive commercial data), and the AI system inadvertently shares such information between competitors or provides them with output that has the same effect as sharing information (e.g., pricing strategies that lead to convergence around the same price point). As a result, safeguards are required to avoid exchanges of competitively sensitive information in such contexts.
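One form such a safeguard could take is strict per-customer partitioning of the data a shared AI system can draw on when responding to any given customer. The sketch below is a simplified, hypothetical illustration of that idea; the class and method names are invented for the example.

```python
# Hypothetical sketch of a tenant-isolation safeguard for a shared AI
# system: each customer's competitively sensitive data is kept in a
# separate partition, and a query may only ever read its own partition.

class TenantIsolatedStore:
    def __init__(self):
        self._partitions: dict[str, list[str]] = {}

    def add_record(self, tenant_id: str, record: str) -> None:
        self._partitions.setdefault(tenant_id, []).append(record)

    def records_for(self, tenant_id: str) -> list[str]:
        # Only the requesting tenant's own data is ever returned, so one
        # competitor's inputs can never inform another's outputs.
        return list(self._partitions.get(tenant_id, []))

store = TenantIsolatedStore()
store.add_record("pharma_a", "planned Q3 price change")
store.add_record("pharma_b", "pipeline candidate list")

# A query on behalf of pharma_b never sees pharma_a's data.
assert "planned Q3 price change" not in store.records_for("pharma_b")
```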

Access to competitively sensitive information can also be a concern where the developer of an AI model is vertically integrated with a business that deploys the model. In this context, the AI model developer would need to ensure that the integrated business does not have access to input or output data relating to third parties’ use of the model.

In the life sciences sector, such concerns were raised in 2018 over Roche’s acquisition of Flatiron Health, which had developed a comprehensive, real-time data resource intended to improve clinical decision making, supporting research and patient outcomes. Since Flatiron’s platform was developed using data sourced from a large number of pharmaceutical companies, its acquisition by a competing pharmaceutical company raised concerns about potential access to (and use of) data that competitors had provided.

Relatedly, the US Department of Justice’s case against RealPage highlights the risk that platforms into which AI is deployed, and which are used by multiple competitors, can go further than information exchange and facilitate collusion. While the pricing structure of the pharmaceutical sector (at least in the EU and UK) makes the risk of such conduct low, the broader life sciences and healthcare sector is not, of course, subject to the same pricing constraints.

Conclusion

Given the speed with which AI is currently transforming the life sciences sector, competition authorities are monitoring developments and seeking to understand the dynamics of new AI-related markets, to ensure that they are able to intervene when appropriate, if competition concerns develop.

It is also possible that, in the EU, the Digital Markets Act might be expanded to include AI models and systems (or some permutation of such models and systems) as core platform services, such that ex ante obligations and restrictions might become applicable.

 

Miranda Cole is a partner and Julien Haverals is an associate at Norton Rose Fulbright LLP. Ms Cole can be contacted on +32 (2) 237 6198 or by email: miranda.cole@nortonrosefulbright.com. Mr Haverals can be contacted on +32 (2) 237 6189 or by email: julien.haverals@nortonrosefulbright.com.

© Financier Worldwide

