Policing bot bankers: US regulation of AI in financial services

July 2024  |  SPOTLIGHT | BANKING & FINANCE

Financier Worldwide Magazine

July 2024 Issue


Like companies everywhere, US banks are increasingly deploying artificial intelligence (AI) systems to increase operational efficiency, enhance their products and services, and improve the overall customer experience. And, like regulators around the globe, US banking agencies are grappling with the implications of these new technologies for their mandates to protect individual consumers and society at large. Unlike their European counterparts, US agencies currently lack new laws such as the European Union/UK General Data Protection Regulation and the EU’s AI Act, which expressly seek to protect consumers from risks posed by AI and other automated decision-making (ADM) systems. It remains unclear whether existing US statutes are equal to the task, but they appear to be holding up so far. In this article, we survey how US banking regulators are wielding their longstanding powers to police the use of cutting-edge technology in this vital sector, with a particular focus on lending.

As AI system outputs substitute for human decisions, they can produce the same types of harms. Existing laws, not surprisingly, address many of those harms expressly or by delegating to regulators the discretion to do so. In regulators’ pronouncements, it has become commonplace to hear that ‘there is no AI exception’ to the laws they enforce. US banking authorities continue to urge banks to identify and manage AI’s risks to safeguard their adherence to applicable legal requirements. ‘The AI did it’ is generally no defence. Indeed, use of AI may expand banks’ exposure.

US banks and their AI systems confront a range of laws on consumer protection, national security, data privacy and security, and operational safety and soundness. Generally, the statutes and implementing regulations vest regulators with substantial discretion in applying them to the more than 4000 widely differing US banks.

A dizzying array of agencies governs US banks. State-chartered banks are regulated and supervised by their chartering state and any other states in which they operate, but they also have the Federal Deposit Insurance Corporation or the Board of Governors of the Federal Reserve System as their primary federal bank regulator. The Federal Reserve is the federal regulator of all bank holding companies, savings and loan holding companies, and state-chartered banks that are members of the Federal Reserve. National banks are supervised and regulated by the Office of the Comptroller of the Currency (OCC) and benefit from federal pre-emption of state banking laws, except for certain state consumer protection laws. For federal consumer financial protection laws, the Consumer Financial Protection Bureau (CFPB) has primary supervisory and enforcement authority over banks with $10bn or more in total assets and certain designated nonbanks; other federal banking regulators, including the OCC, supervise and enforce such laws with respect to banks with less than $10bn in total assets.

To steer the institutions they oversee, regulators often release guidance or interpretations covering matters of particular importance, recently including the use of AI and other technological advances. While acknowledging the benefits AI and automated tools offer to banks, their customers and society, the agencies also want banks to be attuned to – and to manage – the risks these technologies pose.

Antidiscrimination and other consumer protections

A particular emphasis of the regulators has been the potential for ADM to produce outcomes that discriminate against historically disadvantaged groups, especially those protected under the Fair Housing Act and Equal Credit Opportunity Act (ECOA). Regulators have undertaken various efforts to ensure that banks have risk management and quality control processes to prevent unlawful discrimination from automated lending processes. For instance, in June 2023, several banking agencies proposed that banks develop quality control standards for the use of automated valuation models by mortgage originators and secondary market issuers in real estate valuations. A final rule has not yet been adopted. Additionally, the regulators expect banks to manage AI risks to avoid any other illegal credit practices that would interfere with their meeting the credit needs of all the communities they serve, as required by the Community Reinvestment Act.

The Federal Trade Commission (FTC) and CFPB share enforcement of consumer protection laws combatting fraud, deception and unfair business practices, with the FTC overseeing for-profit entities like mortgage companies, but not banks, and the CFPB or other bank regulators responsible for banks. The FTC takes the position that algorithmic discrimination against protected classes can constitute an unfair or deceptive act or practice in trade or commerce in violation of section 5 of the FTC Act. Likewise, the CFPB claims that algorithmic discrimination against protected classes amounts to an unfair, deceptive or abusive act or practice in connection with consumer financial products or services under sections 1031(a) and 1036(a)(1)(B) of the Dodd-Frank Act. A court has ruled against this CFPB interpretation in a challenge from much of the industry; however, the agency is appealing. The FTC insists that lenders test and monitor whether their models result in potentially unlawful discrimination, even where the lenders do not collect protected class information. In the agency’s words: “If, for example, a company made credit decisions [using AI] based on consumers’ Zip Codes, resulting in a ‘disparate impact’ on particular ethnic groups...that practice [could be challenged] under ECOA.” For its part, the CFPB has cautioned that “ECOA and [the implementing] Regulation B do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions” (which the statute mandates). Instead, the CFPB has issued guidance for how lenders can supply the necessary explanations.
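To make that testing expectation concrete, the brief sketch below shows one simple check a lender’s fair lending team might run on an automated model’s outcomes: comparing approval rates across groups and flagging any adverse impact ratio that falls below the commonly cited four-fifths benchmark. The data, group labels and threshold are illustrative assumptions only; they are not a method prescribed by the FTC, the CFPB or any statute.

# Illustrative sketch only: a simple adverse-impact check on automated approval
# decisions. Group labels, data and the 0.8 threshold are assumptions for
# demonstration; they are not a regulatory standard or any agency's method.
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group, threshold=0.8):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += 1 if ok else 0

    rates = {g: approved[g] / total[g] for g in total}
    ref_rate = rates[reference_group]
    results = {}
    for group, rate in rates.items():
        ratio = rate / ref_rate if ref_rate else float("nan")
        results[group] = {"approval_rate": rate,
                          "ratio_vs_reference": ratio,
                          "below_threshold": ratio < threshold}
    return results

# Hypothetical usage with made-up outcomes from an automated underwriting model
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 55 + [("group_b", False)] * 45
print(adverse_impact_ratios(sample, reference_group="group_a"))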

Beyond discrimination concerns, regulators also worry that use of AI systems may reduce the availability and accuracy of information needed to empower consumers. The Fair Credit Reporting Act (FCRA) requires certain disclosures to potential borrowers and others regarding credit (or background) checks and further disclosures if the report will lead to an adverse action. Lenders and others subject to the FCRA must identify the ‘key factors’ that affect the outcome of a covered decision based on a credit score. Certain AI models are sometimes described as ‘black boxes’ because it is difficult to understand how the data fed into the model produce its outputs. As with ECOA, using a ‘black box’ algorithm may be inconsistent with FCRA obligations to identify those key factors.
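For a simple, interpretable scoring model, the ‘key factors’ behind an adverse decision can be approximated by ranking how much each attribute pulled the applicant’s score below a reference score, as the hypothetical sketch below illustrates. The feature names, weights and reference values are invented for illustration, and genuinely ‘black box’ models require more sophisticated explanation techniques than this.

# Illustrative sketch: ranking "key factors" for an adverse-action notice from a
# linear scoring model. Feature names, weights and reference values are
# hypothetical; a black-box model would need a dedicated explanation method.

def key_factors(weights, applicant, reference, top_n=4):
    """Rank features by how much they pull the applicant's score below the
    reference score (larger negative contribution = more important reason)."""
    contributions = {
        name: weights[name] * (applicant[name] - reference[name])
        for name in weights
    }
    adverse = sorted(contributions.items(), key=lambda kv: kv[1])
    return [name for name, delta in adverse[:top_n] if delta < 0]

weights = {"payment_history": 5.0, "utilization": -3.0, "age_of_file": 1.5}
reference = {"payment_history": 0.95, "utilization": 0.30, "age_of_file": 10}
applicant = {"payment_history": 0.70, "utilization": 0.85, "age_of_file": 2}

# Prints the factors that most reduced the hypothetical applicant's score,
# most significant first
print(key_factors(weights, applicant, reference))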

The CFPB also has warned that: “In instances where financial institutions (FIs) are relying on chatbots to provide people with certain information that is legally required to be accurate, being wrong may violate those legal obligations.”

Safety and soundness and third-party risks

Banks are expected to have well-developed risk-management programmes that address the models used in the various aspects of their operations. Poorly conceived or overly aggressive models can lead to a bank being deemed to be operating in an ‘unsafe and unsound’ manner. Regulators have published supervisory guidance for banks’ model risk management. While not specific to AI models, this guidance certainly applies to them. The guidance includes expectations for bank boards and senior management: “Senior management, directly and through relevant committees, is responsible for regularly reporting to the board on significant model risk, from individual models and in the aggregate, and on compliance with policy. Board members should ensure that the level of model risk is within their tolerance and direct changes where appropriate.”

Among other operations, AI systems can help banks with their compliance obligations under the Bank Secrecy Act’s anti-money laundering and know-your-customer requirements and the US sanctions regime, as well as with cyber security, fraud detection, and other safeguards against malefactors. Because AI systems make predictions based on probabilities, they are sometimes incorrect. Banks therefore cannot leave compliance and other defences to AI alone. The government expects FIs to have humans working alongside the AI systems to catch their mistakes.
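As a rough illustration of that human-in-the-loop expectation, the sketch below routes model-generated alerts to a human review queue rather than letting the model act on its own. The alert fields, score threshold and triage rule are purely hypothetical; real transaction-monitoring systems are considerably more elaborate.

# Illustrative sketch: keeping a human in the loop for AI-flagged activity.
# The alert structure, score threshold and review queue are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alert:
    transaction_id: str
    model_score: float   # model's estimated probability of suspicious activity
    reason: str

@dataclass
class ReviewQueue:
    pending: List[Alert] = field(default_factory=list)

    def triage(self, alert: Alert, auto_log_below: float = 0.05):
        # Never auto-file or auto-block on the model alone: very low-score
        # alerts are logged for sampling-based quality assurance; everything
        # else goes to a human analyst for review.
        if alert.model_score < auto_log_below:
            return "logged_for_periodic_sampling"
        self.pending.append(alert)
        return "queued_for_human_review"

queue = ReviewQueue()
print(queue.triage(Alert("txn-001", 0.62, "possible structuring pattern")))
print(queue.triage(Alert("txn-002", 0.01, "routine payroll payment")))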

To the extent that a bank relies on an AI tool developed or implemented by a third party, including an affiliate of the bank, federal regulators have adopted guidance on managing risks associated with those relationships. A bank remains responsible for the operations and activities that it outsources, and failures by a service provider are viewed as failures of the bank. Like the model risk management guidance, the third-party risk management (TPRM) guidance specifically addresses the role of bank boards. It states that boards of directors bear ultimate responsibility for overseeing TPRM and should hold senior management accountable for engaging safely in third-party relationships. The guidance stresses the importance of conducting periodic independent reviews to assess the adequacy of TPRM processes and of developing documentation and reporting standards and controls for those processes.

Concluding thoughts

US banks, of course, are subject to numerous other federal, state and local laws affecting their use of AI. As just one example, they must comply with a growing number of state privacy laws. And a gushing stream of new statutes and regulations is under consideration at all levels of government in the US. Bank management, and the boards to whom they report, should ensure their institutions have a comprehensive programme for managing AI risks, but one that is flexible enough to keep up with rapid changes in law and technology. For the time being, at least, bot bankers cannot be left to police themselves.


Peter J. Schildkraut is co-leader of the technology, media & telecommunications industry team, Amber A. Hay is a partner and Paul Lim is an associate at Arnold & Porter. Mr Schildkraut can be contacted on +1 (202) 942 5634 or by email: peter.schildkraut@arnoldporter.com. Ms Hay can be contacted on +1 (202) 942 5259 or by email: amber.hay@arnoldporter.com. Mr Lim can be contacted on +1 (212) 836 7890 or by email: paul.lim@arnoldporter.com.

© Financier Worldwide


BY

Peter J. Schildkraut, Amber A. Hay and Paul Lim

Arnold & Porter
