Q&A: AI regulation in the US

September 2024  |  SPECIAL REPORT: DIGITAL TRANSFORMATION

Financier Worldwide Magazine

FW discusses AI regulation in the US with Helen Christakos, Rachel Kim, Daniel Mitz, Daren Orzechowski and Alex Touma at A&O Shearman.

FW: Could you provide an overview of how artificial intelligence (AI) developments in the US are driving the need for a legal and regulatory response? What practices and types of deals are at the forefront?

Orzechowski: Questions about generative artificial intelligence (GenAI) now arise at the start of almost every new project. We see concerns about AI, especially as we head into an election cycle. Key areas in the US are intellectual property (IP), data privacy, consumer protection, deepfakes, national security, decision automation and bias. The US has passed relatively little legislation compared to other jurisdictions such as the European Union (EU), but existing laws still apply regardless of the technology involved. We see venture capital and growth equity investment transactions, but also M&A. Many companies are looking to build out their technology platforms by acquiring smaller, quickly scaling AI companies with synergistic technology that further automates their operations. As companies try to identify the many different applications of AI, transactions focused on licensing, alliances, joint ventures and research and development are popular ways to get into the space quickly.

Christakos: Regulators and lawmakers recognise that existing laws do not address AI technology and are actively responding by proposing new legislation in this area. There have been numerous proposed bills at both the federal and state level. Although it is unlikely a federal AI bill will be passed soon, approximately 10 states have enacted standalone AI laws, and there are numerous other laws under consideration in state legislatures. Within a few years, it would not be surprising if most states have standalone AI laws. These state laws generally regulate the use of GenAI for decisioning without significant human oversight in critical functions, such as access to financial products and services, healthcare services, insurance products and educational opportunities. Currently, many uses of AI do not come within the scope of these laws. In addition, there are eight comprehensive state privacy laws currently in effect which also regulate automated data processing. The requirements of all these laws must be addressed in privacy policies, commercial transactions, the design and development of products and services, and M&A transactions.

Touma: AI, and in particular GenAI technology, is testing the limits of existing laws, which were designed for a different era. For example, IP laws do not provide clear guidance on what rights GenAI developers have with respect to using works of authorship to train large language models (LLMs). Regulations in many jurisdictions do not expressly address the use of data with GenAI technology, which may retain and unintentionally share data with unauthorised third parties. And many jurisdictions have not yet implemented the laws and regulations necessary to address new issues arising from GenAI technology, including deepfakes, AI-generated voices, election interference and scams. Sophisticated GenAI users are relying on indemnification provisions for protection from potential legal liability, and clearer legal guidance would help parties allocate risks under agreements. Such guidance is necessary to encourage the widespread adoption and use of GenAI technology.

AI, and in particular GenAI technology, is testing the limits of existing laws, which were designed for a different era.
— Alex Touma

FW: How would you characterise the harmonisation of AI regulation across the world? What are the implications for companies and their advisers?

Christakos: We do see common threads between AI and privacy laws in the US and those in other international jurisdictions. US state AI laws and laws in other jurisdictions generally focus on two key areas: transparency and bias. All of these laws require some type of notice of AI use and of how certain data is being processed by AI tools, and the notice must be sufficiently comprehensive that consumers are fully aware of how their data is being processed. The laws also require that companies use AI tools in a way that does not process data in a biased manner with an adverse impact on certain members of society. Laws vary, however, in how these two tenets are to be implemented, and that is why compliance – particularly for global businesses – is so complex.

Mitz: The global regulatory landscape makes it more complicated to go to market. I do not think it is going to get less complicated any time soon. Companies need to stay out in front of product plans to validate the regulatory landscape and proactively make decisions. One size does not fit all and not all advisers have the same multijurisdictional capabilities.

Orzechowski: There is uncertainty as to how to regulate the use of this transformative technology. We start with the US, but it gets more complicated when you go cross-border due to a current lack of harmonisation. Given the early stages of regulation, parties are having to make bets on how the law is likely to develop. There is uneasiness as to how the lines on risk should be drawn and allocated. I like to look to history and other technologies to determine how best to derisk and navigate the situation. There is a lot of similarity to how the law developed in the early stages of cloud computing before participants had a clear understanding of what contract terms were needed in a variety of deals. We can also learn from privacy law and how, for example, US law developed in parallel with the General Data Protection Regulation (GDPR) in the EU.

While not a lot of US AI legislation has been passed, a number of bills have been introduced at the state level and federal level.
— Daren Orzechowski

FW: What do you consider to be the key areas for potential AI regulation in the US?

Touma: GenAI tools require a large amount of content to train models, and the use of unauthorised content has given rise to numerous IP and other lawsuits in the US. If clear legal guidance does not emerge, or if the outcomes of these lawsuits impose large liabilities on GenAI developers, the development and use of AI technology in the US may be deterred, putting the US at a competitive disadvantage compared to jurisdictions that are moving more quickly to regulate the technology. Regulations that both enable the use of GenAI and protect the rights of content authors are needed to keep the US competitive. Other areas ripe for regulation include the use of GenAI technology for malicious purposes, such as deceptive practices, election interference and scams.

Orzechowski: While not a lot of US AI legislation has been passed, a number of bills have been introduced at the state level and federal level. At the federal level, there seems to be a fair amount of concern regarding deepfakes and means for online viewers to understand the source and authenticity of what they are seeing. Concern is heightened with upcoming elections. There were questions about whether we would see IP legislation; it currently seems that IP law will first be developed through the courts, with guidance from IP-focused agencies such as the Copyright Office.

Christakos: Regulation to date has focused on the use of GenAI for independent decisioning in critical functions. For example, New York City Local Law 144 requires those using an AI tool to hire New York City-based employees and independent contractors to conduct annual independent audits of the tool for bias against protected characteristics, such as race, to stop using the tool if more than a year has passed since the most recent bias audit, and to publicly post a summary of the audit results in the career section of the employer’s website. The law also requires that the employee or candidate be notified that an AI tool is being used at least 10 business days before such use, that a candidate be allowed to request an alternative selection process or accommodation, and that the employee or candidate be informed of the job qualifications and characteristics that will be used in the assessment.
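To make the bias audit requirement concrete: the rules implementing Local Law 144 describe comparing selection rates across demographic categories using an ‘impact ratio’ – each category’s selection rate divided by that of the most-selected category. The short Python sketch below is a minimal, hypothetical illustration of that arithmetic only, not a compliant audit; the category labels and data are invented.

    from collections import Counter

    def impact_ratios(candidates):
        """Selection rate and impact ratio per demographic category.

        `candidates` is a list of (category, selected) pairs, where
        `selected` is True if the AI tool advanced the candidate.
        Each category's impact ratio is its selection rate divided
        by the selection rate of the most-selected category.
        """
        assessed = Counter(cat for cat, _ in candidates)
        chosen = Counter(cat for cat, sel in candidates if sel)
        rates = {cat: chosen[cat] / n for cat, n in assessed.items()}
        top = max(rates.values())
        return {cat: (rate, rate / top) for cat, rate in rates.items()}

    # Hypothetical screening data: (demographic category, selected by tool)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    for cat, (rate, ratio) in impact_ratios(sample).items():
        print(f"category {cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")

An impact ratio well below 1.0 for a category is the kind of disparity an audit summary would surface; the ‘four-fifths’ (0.8) benchmark from US employment selection guidance is a common informal reference point.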

It is important to make sure a company knows how AI is being used in its organisation – whether finance, development or marketing.
— Daniel Mitz

FW: Could you provide an insight into the privacy concerns that arise from AI’s reliance on large volumes of data, information and images?

Christakos: Data is now considered the most valuable asset on earth. Many emerging and established companies rely on web scraping to get the data they need to operate and grow their business – from training AI technology to conducting price comparisons between products and performing identity verification. In the US, there is no single legal or regulatory framework that governs the collection and use of data for purposes of training AI. Instead, we have a patchwork of case law, state statutes and executive orders governing the way data can be collected and how it can be used to train AI. The Computer Fraud and Abuse Act is one of these key laws. It imposes civil and criminal liability for improperly accessing a ‘protected computer’ – that is, any computer connected to the internet. Whoever intentionally accesses a computer without authorisation or exceeds authorised access, and thereby obtains information from any protected computer, may be punished under the Act.

Touma: AI tools require a large amount of data and content for training. If this data includes personal information, such as an individual’s name and address, the processing of such data by an AI tool, and the use of output from the tool containing such data, may expose the user to legal liability, including regulatory and contractual liability. Many jurisdictions are passing laws to address the risks raised by AI tools, including GenAI. If companies are not seeking proper counsel regarding the changing regulatory and legal landscape with respect to AI, they may be violating laws without knowing it. Companies should ensure that they are obtaining up-to-date information regarding legal developments and best practices, including notice, consent, and data storage and retention requirements.

FW: What steps should companies take to manage governance issues around their use of AI?

Orzechowski: Companies need to be realistic and take a practical approach. Even if they tell their employees not to use AI, employees are going to because the technology provides a way to work more efficiently. Everyone wants to make their jobs easier, so this is a natural result. When companies take the approach of banning the technology, we see a lot of issues arise. A better approach is to thoughtfully look at the business case for using the technology, license and provide an appropriate AI solution from a trusted provider that has cleared all of the organisation’s information security standards – especially with respect to how it is hosted and deployed – and then set and socialise rules of engagement for employees’ authorised use of AI. Giving team members an approved AI solution usually produces a better result.

Mitz: There is pressure on companies internally and from investors to use technology to deliver products and services, as well as to streamline their operations. Everyone is looking for a use case that can deliver on these results. Companies need to carefully evaluate the costs and benefits. Companies should be concerned about how AI is used in their businesses but if they take an appropriate and deliberate approach, AI can increase productivity and lower costs. In any event, even if a company prohibits its employees from using AI, employees are going to use it because the technology is new and exciting.

Christakos: If a company’s competitors are using AI, it should too. Companies need to create policies and guidelines for employees to ensure that their products and services leverage AI in a way that complies with applicable laws – especially if they are scraping data to train AI and machine learning tools or using GenAI for independent decisioning in critical functions.

Touma: AI technology, in many cases, is desired by employees and can ultimately lead to operational efficiencies, cost savings and revenue generation. Companies should embrace AI technology and understand the various types of AI tools that are used or desired by employees. Companies should implement policies governing, or prohibiting, the use of AI tools. Employees should be trained to understand the policies and the risks associated with such tools so that they make informed decisions about how and when to use them. Companies may consider implementing technical measures to actively block the use of risky tools. If a company decides to use third-party tools, it should ensure that the applicable licence agreements set out the proper rights and restrictions for both the company and the counterparty, such that the company is reasonably protected from legal liability.

Comprehensive due diligence with an additional focus on a target company’s AI-related capabilities and risks is becoming more crucial.
— Rachel Kim

FW: What emerging trends are you seeing in M&A transactions focused on AI?

Mitz: Much of this focus is the standard ‘build it’ or ‘buy it’ analysis, as well as time to market. When a company decides to buy another company that either uses AI to deliver its product or service or is itself an AI-based application, there is more focus on how the application, product or service is created and delivered. Companies have a broad definition of what AI is, and it may differ from how others define it. There is also a need to focus on the acquirer’s go-to-market plans for the acquisition. Because regulatory standards differ from jurisdiction to jurisdiction, a product or service may not be able to work the same way in each one. Of course, how a model is trained should also be a focus, to ensure IP and privacy laws are being complied with.

Orzechowski: AI is bringing about changes in how due diligence is conducted and in the typical representations and warranties (R&W) package that we see, across deals of all sizes. In due diligence, there is a greater focus on understanding how the technology was developed and trained and how the outputs are generated. For a commercial AI service, understanding what indemnification and other risk coverage is given to customers is very important. Uncapped liabilities to a large customer base can be of particular concern to a buyer given the current state of the law in this area. That due diligence is then turning into more specific disclosure representations, beyond the basic IP non-infringement and compliance-with-laws representations. The risks presented also raise concerns for R&W insurance, so in some deals where GenAI is involved, special indemnities may be necessary.

Christakos: AI has brought about a significant change in M&A. We now have a separate workstream for AI due diligence – separate R&Ws, separate defined terms and separate diligence questions. In deals involving R&W insurance, AI-related coverage is discussed with insurers.

Kim: Given the complexity and nascent nature of many AI technologies, comprehensive due diligence with an additional focus on a target company’s AI-related capabilities and risks is becoming more crucial. For example, antitrust considerations must be part of the due diligence process: as AI technologies become more prevalent, regulatory bodies are scrutinising M&A transactions to assess potential anticompetitive effects. Also prevalent is due diligence on a target’s insurance policies, with a special focus on cyber insurance covering AI-related exposures such as privacy and copyright-related liabilities. In addition to standard contract diligence issues, a target’s contracts for AI products and services should be reviewed to consider how the target has allocated, between the contracting parties, the various risks inherent in offering or using AI products and services, such as through indemnification obligations, limitations on liability and dispute resolution provisions.

Although it is unlikely a federal AI bill will be passed soon, approximately 10 states have enacted standalone AI laws, and there are numerous other laws under consideration in state legislatures.
— Helen Christakos

FW: In the venture capital space, what trends would you highlight in connection with AI focused start-ups?

Kim: With ongoing excitement around building AI technology, there is a growing emphasis on specialised AI applications. Many start-ups are solving specific problems faced by specific sets of customers, instead of developing ‘generic’ AI solutions. Venture capital funding is increasingly flowing into a diverse range of companies across sectors such as healthcare, FinTech, mobility, logistics, sustainability and autonomous systems, which reflects AI technologies’ broad applicability and accessibility for smaller enterprises and industries traditionally outside the technology space. There is also a shift from building specialised AI technology to commercialising and monetising it, by getting the technology into consumers’ hands and finding business uses. This is driving significant investment interest, both domestic and international. We have been seeing heightened investment activity in US AI companies by foreign investors searching for start-ups with cutting-edge technologies to which they can provide strategic guidance and resources to support scaling up.

Touma: There are a lot of emerging service providers in the GenAI space. Because of legal uncertainty regarding the risks of training LLMs with publicly accessible or other content, and the potential for infringement claims regarding output generated by GenAI technology, service providers may be less willing to offer uncapped indemnification for this liability. This contrasts with indemnification for infringement caused by use of a platform, such as patent infringement, for which many service providers have historically provided uncapped indemnification. Uncertainty in the legal landscape regarding GenAI and potential liability is causing emerging companies to rethink their approach to contractual risk allocation.

FW: What essential advice would you offer to companies looking to navigate the uncertainties related to AI?

Touma: Companies should rely on advisers who understand both how the technology works and what rights and obligations apply to developers and users, under both the law and applicable contracts. Because AI technology may retain processed data and later disclose it to unauthorised third parties, many companies have started to implement company-wide policies to inform employees about the risks of inputting sensitive company data into AI tools. Such policies also address other areas of concern, such as the use of AI technology in hiring, which could lead to liability if personal data is used with the technology or if it produces biased output, and the creation of output used by the company, which could expose the company to IP infringement claims given the pervasive use of unauthorised third-party IP in training LLMs.

Orzechowski: Hiring reliable staff and experienced advisers is important. This includes not only lawyers but also technologists and other experts. Always start by considering what problem you are solving with a new technology. If there is a good use case, either for internal use or something that is customer facing, then consider how to develop it and handle change management, whether through organic growth and development or M&A. There are many paths. Sometimes a company may want to move fast and break things in the process of developing better technology; however, many larger organisations will want to roll out guardrails to manage risk while providing a means for their teams to use this exciting technology. No one wants to fall behind or miss out on the promise of AI, but there are ways companies can protect themselves and their investors.

Mitz: It is important to make sure a company knows how AI is being used in its organisation – whether finance, development or marketing. Evaluate the tools employees are using and why. There are likely several options, some of which may be better from a legal perspective and some of which a company will not want used in its organisation, for reasons that are not always obvious at first glance. Sometimes it might be better to build than buy.

Kim: Companies should be intentional and strategic in their approach to AI in the current environment. It is important that companies prioritise ethical considerations in AI development and deployment, including devoting sufficient resources to continuously assessing how AI systems and tools are used to make decisions and handle data within the company. Companies should also proactively implement risk assessment and mitigation strategies to address potential biases in AI algorithms and to keep pace with changing and developing regulatory compliance regimes such as the GDPR and the California Consumer Privacy Act. Earlier-stage companies that may not have sufficient resources to allocate to risk management should engage with industry peers and regulatory bodies to stay informed about AI trends, regulations and best practices.

 

Helen Christakos leads A&O Shearman’s US privacy team and has broad experience spanning transactional, counselling and contentious matters. She has experience negotiating the privacy and cyber security issues in M&A, private equity, financings, IPOs, SPACs, asset purchases and other commercial transactions, product counselling (developing products and services that comply with privacy and data security laws), developing privacy and security policies, and managing cyber security incidents and breaches. She can be contacted on +1 (650) 388 1762 or by email: helen.christakos@aoshearman.com.

Rachel Kim is a senior associate in A&O Shearman’s capital markets practice. She focuses on capital markets and emerging growth and venture capital transactions. She represents public and private companies, investment banks and sponsors in corporate finance, including private and public securities offerings, and also advises on a broad range of disclosure and corporate governance matters. She can be contacted on +1 (650) 838 3752 or by email: rachel.kim@aoshearman.com.

Daniel Mitz is a technology M&A lawyer as well as a technology sector lead at A&O Shearman. He represents companies in domestic and cross-border M&A transactions. He has extensive experience advising clients on corporate governance and fiduciary duty matters. Mr Mitz has lived and worked in Silicon Valley for the past 25 years and is ranked in Chambers. He can be contacted on +1 (650) 838 3805 or by email: daniel.mitz@aoshearman.com.

Daren Orzechowski is technology sector lead at A&O Shearman. He is a partner in the Silicon Valley office focusing on M&A and technology transactions. For more than 25 years, he has represented clients from a variety of industries, many of which touch AI and related technologies regularly, in connection with their most important strategic intellectual property, data and technology-focused transactions. He is ranked by Chambers, Legal 500, IAM Patent 1000, and IAM Strategy 300: The World’s Leading IP Strategists. He can be contacted on +1 (650) 388 1701 or by email: daren.orzechowski@aoshearman.com.

Alex Touma is a partner in A&O Shearman’s technology transactions practice in San Francisco. He specialises in transactions and counselling involving IP and technology, representing companies of all sizes and in a variety of industries, including generative AI, blockchain, cryptocurrency, FinTech, software, semiconductor and mobile technology companies. He also advises companies on technology-related matters including technology development and licensing agreements, manufacturing and supply agreements, distribution agreements, hosting agreements and sourcing transactions. He can be contacted on +1 (415) 796 4161 or by email: alex.touma@aoshearman.com.

© Financier Worldwide

