Given regulatory uncertainty, banks will take tentative steps to embrace AI


WASHINGTON — Experts say the ever-evolving use cases for generative artificial intelligence could spur banks to ramp up their adoption of the technology dramatically in the coming years. But unresolved regulatory concerns may hamper that adoption until there is some clarity about what is permitted and what is not.

Federal Reserve Vice Chair for Supervision Michael Barr noted in remarks at the DC Fintech Week conference Tuesday that the Fed is watching how financial institutions use generative AI in trading and weighing its potential trade-offs.

"Today, most algorithmic trading is very widespread and as a form of using machine learning to engage in trading activities," he said. "That can generate, again, benefits in terms of efficiencies, but also risks in terms of fairness, in terms of concerns about market manipulation and in terms of efficiency on the other side, so we just have to watch — in a very long-term way — the potential risks and benefits of artificial intelligence."

Acting Comptroller of the Currency Michael Hsu also spoke at the conference, and said he believes the lively public discussion of AI in the financial industry may be getting ahead of the reality on the ground at banks.

"To the extent that banks are engaging or have been engaged — especially on machine learning — it's really been a crawl, walk, run approach, with good controls, really focused on use cases that make sense and that meet the safety and soundness and fairness standards that we have," he said. "We expect banks to continue to take that approach because it seems like the right one."

Davis Polk partner Gabe Rosenberg, who helps banks and other institutions adapt to the changing AI regulatory environment, said that until regulators fully understand the back-end mechanics of AI models and their uses, they are unlikely to give their blessing.

"If you think about regulation and examinations, generally they are a study of what financial institutions do and how to incentivize them to do the right things, but if you don't really know why they're doing what they're doing, that becomes really difficult," he said. "I think there are concerns that the AI would be making decisions based on the wrong things, like that they'd be looking at correlation between events rather than causation."

Clifford Chance lawyer Young Kim agreed, saying that regulators will need more clarity around the mechanics of generative AI models before the technology can gain widespread adoption.

"The ability to explain how an AI model arrives at a particular result is a foundational question that needs to be answered before a company can build a proper risk management framework around its usage," he wrote in an email. "We think this is one of the primary reasons why AI deployment has been primarily relegated to back-office automation — [for example,] complaints management — or front-end applications [such as] chatbots rather than across the entire bank supply chain."

Kim also raised another widely discussed concern for policymakers: the potential for AI to replicate human biases, something he thinks regulators are still wrapping their heads around.

"The banking agencies have paid token acknowledgment to the benefits and use cases of AI but remain principally focused on the risks they pose to consumers and the banking system more generally," he wrote in an email. "Of particular concern is the potential for AI to be used for credit decisioning and account onboarding in a manner that heightens existing biases and furthers the exclusion of underbanked groups."

Hsu said during the conference that while AI holds plenty of opportunity, regulators are eyeing the risks that come with predictive models trained on existing data, including bias, discrimination and consumer protection issues.

"Those [concerns] are very live, and the banks know that, supervisors know that," he said.

"It's a prediction machine … based on historical data and so to the extent that that historical data has biases in it, we want to make sure that we're not just moving into a world that's locked everybody into the past. That's why we focus on things like explainability and governance."

Rosenberg noted the potential for a wide swath of banks to make similar decisions as a result of relying on similar AI models, a phenomenon often referred to as herd behavior. While not a new risk, Rosenberg said regulators will be eyeing it closely, pointing to recent remarks by Securities and Exchange Commission Chair Gary Gensler calling for regulators to be attentive to the potential for herd behavior.

"Herd behavior is a concern anywhere traders all look at the same signals," he said. "If everyone does the same thing at the same time, that can lead to both bubbles, as we see in the meme stocks and crashes, when people do the same thing," he said. "It reminds me of the run on repo during the 2008 financial crisis, where everyone had set up the same risk web and [did] the same thing at the same time, which caused a lot of problems."

Kim noted that regulators have flagged third-party risk management as a critical aspect of their approach to AI as the technology becomes more integrated into the banking industry, and have expressed interest in heightening scrutiny in that area.

"We know third-party risk management has been a heightened area of focus for supervisors for quite some time, and would expect any concerns around concentration risk to be flushed out during the examination cycle," he noted. "The agencies also have levers they can pull under Dodd-Frank as well as the Bank Service Company Act to directly pursue AI service providers to banks."

Ian Katz, managing director of Capital Alpha Partners, thinks regulatory concerns about herd behavior are warranted, but noted that regulators have dealt with the phenomenon before with other technologies. He expressed doubt that any overarching regulation could accomplish as much as individualized diligence by bank examiners on the ground.

"One could argue that herd behavior was around before AI and will always be a concern," he said. "I think AI may require more work by bank examiners — examiners on the ground will have to look at behavior and see if there's herd behavior. I'm not sure that overarching regulation would accomplish what on-the-ground scrutiny would."

James McPhillips, a lawyer at Clifford Chance, agreed, saying that in the banking industry's ideal world, integrating AI into regulators' current examinations could assuage regulators' concerns to the extent that they forgo more prescriptive regulations.

"Banks are hopeful that there will not be specific AI rules that get rolled out, but rather that regulators will view AI technologies through existing regulations," he said. "It is very early on in both the technology's capability, but also how regulators are viewing generative AI, and so I think that is why a lot of banks are approaching this with caution because a lot of this is very fluid."

Rosenberg noted that there is still significant uncertainty about the extent to which generative models can anticipate unprecedented edge-case scenarios, something he said even human examiners struggled to do ahead of the 2008 financial crisis and this year's bank failures. Because AI is trained on historical data, it may have a blind spot for less-predictable outcomes.

"By definition, tail risks either are super uncommon, or there's something we just think about and don't know about in advance and don't know how they're gonna manifest [themselves]," he said. "Just like the financial regulators, the industry sees things in retrospect that should have been obvious, people are worried the same will happen to AI and won't be able to react to them in real time."

So while uncertainty abounds, McPhillips said some banks are more hesitant than others to lean into AI models.

"Some banks are still very reluctant to use AI that would be put into production — and meanwhile, some banks are trying to make sure they're not losing a competitive advantage and want to deploy generative AI tools," he said. "Even with that wide range of approaches right now, though, the vast majority are erring on the side of caution."

Those who have been particularly aggressive include fintech companies, whose online business models and tech-heavy dispositions, he noted, may make them more bullish on adopting generative AI.

"[Fintechs] view generative AI as really a matter of survival," he said. "If they don't adopt general AI tools into those fintech products they could be left behind, so I think at least in the fintech ecosystem, they've been quite aggressive."

And while the bank regulators have yet to formalize any rulemakings on generative AI, the Biden White House recently issued an executive order that, among other things, called for companies that provide widely used models to conduct and report on safety tests and to mitigate bias in various AI use cases. The order signaled that the administration and Congress could soon move legislation on the matter.

"The actions that President Biden directed today are vital steps forward in the U.S.'s approach on safe, secure, and trustworthy AI," a fact sheet accompanying the executive order said. "More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation."
