Regulators say they have the tools to address AI risks


WASHINGTON — Bank regulators said on Friday that while they are actively exploring the risks that could emerge from financial institutions' reliance on artificial intelligence, existing tools and laws are sufficient to prevent those risks from harming consumers or the financial system. 

"Whatever the technology — including artificial intelligence — that is going to be utilized by a financial institution, that has to be utilized in a way that is in compliance with existing law, whether it's consumer protection, safety and soundness or any or any other statute," said Federal Deposit Insurance Corp. Chairman Martin Gruenberg. "Our agencies currently have authority to enforce those laws over the technology." 

The comments came during a roundtable discussion among Federal Reserve Vice Chair for Supervision Michael Barr, Gruenberg, acting Comptroller of the Currency Michael Hsu and Consumer Financial Protection Bureau Director Rohit Chopra at the National Fair Housing Alliance's Responsible AI Symposium in Washington, D.C.

AI has been a growing area of concern for federal financial regulators. In December, a report by the Financial Stability Oversight Council identified AI as an emerging risk for the first time. All four agencies sit on FSOC, a financial oversight body created by the Dodd-Frank Act, and the report recommended that member agencies monitor the rapid adoption of artificial intelligence. Regulators have previously raised concerns about AI, including its potential to replicate bias, the lack of explainability of AI algorithms and the risk of herd behavior.

Chopra said many of regulators' anti-discrimination authorities are principles-based and flexible enough to apply to new technology.

"There is no 'fancy technology exception' to the Equal Credit Opportunity Act, the Fair Credit Reporting Act and others and we don't really care how you market your special technology," said Chopra. "If you can't give an adverse action notice, then you can't use it; if you can't give those reasons as to why someone has been denied, you can't use it — and yes, you will be accountable."

Gruenberg also noted that banks must be conscious that when third parties they do business with use AI, the associated risks are the banks' own. Regulators recently updated guidance on how banks must manage risks associated with third parties.

Barr echoed Gruenberg's point about third parties and noted that firms using newer artificial intelligence techniques, such as large language models, need to make sure the technology complies with the agencies' model risk management expectations.

Barr also noted that fair lending rules, including those pursuant to the Community Reinvestment Act, are based on principles of fairness, and as such are concerned less with whether it is the bank, a model or a third party that discriminates than with the emergence of demonstrable harm.

"They can't say, 'This other institution did it, it wasn't me — I didn't understand what they were doing, it's not my fault,'" he said. "That's not the way the law works. The banks are responsible for ensuring compliance with the law."

Barr said the same principle extends to the Community Reinvestment Act.

"We don't — in [examining firms for CRA compliance] — distinguish among the reasons for why they have weaknesses in their fair lending program," he said. "We're really technology agnostic on that. If there's discrimination going on at the bank, they shouldn't be getting a good rating."

Barr said that when AI is applied to lending or underwriting decisions, the tool must be explainable for regulators to determine whether its use is compliant. Good outcomes alone are not enough to ensure compliance, he said.

"If you are the CEO of a bank and you throw darts at a dartboard to decide who to make loans to and it turns out those loans don't default, that's still not safe and sound banking," Barr said.

Chopra highlighted some of the ways his agency is examining existing uses of AI at financial institutions. One example he cited was chatbot models that imitate human behavior, which could be a concern if those communications are misrepresented as human. He pointed to a study his agency conducted of many big banks' use of chatbots. It found that chatbots are often given female names and use imagery like dotted typing bubbles to imitate the markers humans see when a counterparty is typing a text message.

"We have found that many of them are almost faking that they're typing, so what they're trying to do is create that type of [humanlike] experience," he said. "But the reality of this is that it involves a whole other set of challenges and we have made clear that if you're giving deceptive information through your generative AI we have a lot of supervisory tools that we're using" to determine whether those actions comply with the law.

Gruenberg also seemed to speak directly to Congress at one point, noting that any legislation considered should be mindful of the way it may alter these existing authorities. While legislation could empower the agencies, he indicated lawmakers must be careful not to contradict existing regulatory controls.

"You really want to be cautious and think about legislation in this area, without first considering what will be the impact on our existing statutory authorities," he said. 
