Unique issues with AI diligence and representations in M&A

August 2024  |  SPOTLIGHT: MERGERS & ACQUISITIONS

Financier Worldwide Magazine, August 2024 Issue


Generative artificial intelligence (GenAI) technologies can create new expressive content in various media, but they present novel legal challenges that traditional due diligence processes do not fully address.

The unique legal issues with GenAI require companies to conduct AI-specific diligence and obtain AI-specific representations in M&A transactions. It is also important to have someone well-versed in AI on the deal team. Many subtle issues, if not properly understood and addressed, can lead to liability or loss of business value.

This article covers some of the key issues, but it is important to understand that the relevant issues will vary depending on the target company’s involvement with AI. A company’s role may include one or more of the following: collecting data to license to others for AI training, training AI models, developing AI applications, using third-party AI applications, fine-tuning third-party AI models and using AI-generated content created by third parties.

Outlined below are some of the topics that should be covered and why they are important.

First, does the company have an AI governance committee and an AI policy? Companies that lack governance and policies are more likely to have employees using AI in ways that create legal risk and potential loss of assets.

Second, seek disclosure of the AI products and services used by the company. It is important to obtain disclosure of each AI technology used by the target, including the specific version of the technology and the terms that govern its use. Many AI tools have free and paid versions. In general, the free version has much riskier terms, some of which are addressed below (e.g., confidentiality of inputs and indemnities).

Third, if the company is training AI models, does it have the legal right to use the content on which it trains them? Diligence should confirm that the company legally obtained the content (i.e., it was not improperly obtained, e.g., via web scraping), that the company has complied with any licences under which the content was obtained and that, even if the company legally possesses the content, it has the right to use that content for the purpose of training AI.

Some companies have collected user data for years, but the privacy policy under which it was collected did not address the use of that data for training AI. Training AI on content for which the company lacks the necessary rights can lead to ‘algorithmic disgorgement’, a remedy that requires a company to destroy the content, the AI models and any algorithms derived from them. The Federal Trade Commission has enforced this remedy against multiple entities. If part of the deal’s value lies in the company’s AI models, this is a critical issue to assess to avoid potential loss of value.

Fourth, if the company relies on copyright to protect its assets (e.g., music, images and other media), does it use GenAI to create those assets? If so, this may be a problem: the output of GenAI is typically not subject to copyright protection because it is not deemed human authored, which means these materials may have no copyright protection at all.

Fifth, has the company granted any indemnities to AI tool providers? Standard diligence will seek identification of any indemnities the company has granted, but it is prudent to ask this specific question because some companies may not be aware that they have granted such an indemnity. As noted above, many AI tools have a free, individual version and a paid, enterprise version.

Many employees use the free version as part of their job (if the company does not have a policy prohibiting such use) and typically do not read the terms of service. However, the terms for many free versions require the user to indemnify the tool provider if the output infringes third-party rights.

Sixth, has the company used any AI tools that do not treat user inputs as confidential? If so, and employees have entered confidential information, that information may no longer be confidential, and the AI tool may output it to other users. This is not theoretical; it has happened.

Seventh, does the company use any AI code generators? If so, has it updated its open source policy to manage the open source legal risks associated with such use? These tools assist developers by using AI models to autocomplete or suggest code based on developer inputs or tests.

These tools are typically trained on open source software, which is free to use but comes with licence conditions that must be complied with. If the output of an AI code generator is used in software the company is developing, the company may need to ensure compliance with those licence obligations. Most open source licences permit the user to copy, modify and redistribute the code.

However, the conditions vary by licence and can range from simple compliance obligations (e.g., preserving copyright notices) to more onerous, substantive requirements. The more substantive provisions can require that any software that includes or is derived from the open source software be licensed under the same open source licence and that its source code be made freely available, permitting others to copy, modify and redistribute the software for free. For companies that develop software to license for a fee, this can be a significant problem and can erode the return on the investment made to develop that software.

Eighth, has the company conducted vendor diligence on the AI tools it uses? Most companies conduct general technology diligence before adopting a new technology, but there are several AI-specific vendor diligence questions that companies need to ask, and many have not yet updated their vendor diligence checklists to address AI. This is important in part because the law is developing to impose liability on both developers and deployers of AI technology. If the company is using an AI tool, it needs to ensure that such use will not create liability for the company.

Commonly overlooked topics include whether the AI tool was trained on content that the vendor legally possessed and had a right to use for AI training, and whether the vendor used responsible AI development techniques to avoid biased and discriminatory output, among other things. The Equal Employment Opportunity Commission held a company liable for its use of an AI recruiting tool that discriminated based on applicants’ age, even though the company did not develop that technology.

Ninth, does the company exclusively own the content it has generated via AI tools? Some AI tools do not grant users exclusive ownership of the output. In some cases, the tools require the user to grant the tool provider a licence to use any output for its own purposes. In some terms of use, the tool provider makes clear that the tool may generate the same output for another user. For at least these reasons, the company may not exclusively own the content it is using. Whether that matters will depend on the circumstances, but it is important to assess these issues.

Lastly, regulatory compliance should be addressed for certain types of tools and uses. Depending on what an AI tool does and how the company uses it, there may be specific regulatory requirements with which the company must comply, and diligence should determine whether and how it complies. For example, there are specific laws addressing the use of AI tools for employment, consumer finance or housing decisions.

Additionally, certain AI tools output medical or legal information. It is important to ensure the outputs do not cross the line into giving medical or legal advice, which can constitute the unauthorised practice of medicine or law. Other regulatory issues may be relevant based on the function of the tool or the company’s use of it.

Conclusion

The scenarios outlined above are just some of the reasons that AI-specific diligence is needed and some of the topics it should cover. It is important to include someone knowledgeable in AI law and technology on the deal team, so that the relevant diligence issues can be identified and addressed based on the specifics of each deal, the responses can be properly evaluated, and appropriate representations can be drafted and negotiated in the deal documents. If the firm handling the transaction does not have deep AI experience, it should consider engaging special AI counsel for these issues.

 

James Gatto is a partner at Sheppard Mullin. He can be contacted on +1 (202) 747 1945 or by email: jgatto@sheppardmullin.com.

© Financier Worldwide

