BankThink

Regulators must root out bias in AI-based lending

The fundamental question facing financial regulators when they consider the use of artificial intelligence and machine learning in banking is this: Do the rewards outweigh the risks? The question is doubly pressing when those tools are used to underwrite consumer loans.

Just last month, the chairs of the House Financial Services Committee and the Task Force on Artificial Intelligence sent a letter to the heads of five federal financial services regulatory agencies reminding them to keep pace with advances in AI technology and to make sure they police the industry for biased algorithms that could harm consumers and businesses.

The move to AI and machine learning is clearly well underway in financial services. A 2020 survey of lenders found that 88% of them planned to increase their investment in AI in the coming years, specifically for use in evaluating credit risk. In a recent survey, credit union executives ranked AI lending technology among their top investment priorities for 2022.

Research is also underway. For example, FinRegLab just published an in-depth report on the data science behind these emerging tools and technologies. Because AI and machine learning models can process more data and detect subtler patterns, they tend to assess risk more accurately than human judgment or the older statistical models they are replacing. That generally means less risk for financial institutions and greater access to credit for consumers.

But without adequate guardrails, AI and machine learning can mean trouble. The two big issues are, first, whether AI models introduce safety and soundness risk into the financial system, and second, whether they exacerbate bias in lending.

Both problems can be solved if regulators put the right approaches in place, enabling AI and machine learning not only to overcome those risks but also to make financial services fairer and safer at the same time. As we see it, those approaches include both updated regulatory guidance and regulators’ own adoption of AI-powered tools for use in supervision and enforcement proceedings. Lenders choosing to adopt AI models are also finding ways to set up those guardrails for themselves.

As financial institutions increasingly use AI and machine learning in their operations, they should be prepared to use it to strengthen their compliance and legal functions as well. For instance, if an AI software tool builds an underwriting model, then the lender’s legal, compliance and risk teams should use an AI tool, combined with human oversight, to evaluate the model for fairness, stability and accuracy.
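To make the fairness piece of that review concrete, here is a minimal sketch of one common screening step: the "four-fifths" disparate impact ratio. All of the figures, group labels and thresholds below are hypothetical, and a real fair-lending review would pair far more sophisticated analysis with legal judgment.

```python
# Minimal sketch: screening an underwriting model's approval decisions for
# disparate impact using the "four-fifths" rule of thumb. All data here
# are hypothetical and purely illustrative.

approvals = {
    # group label -> (number approved, number of applicants)
    "group_a": (420, 600),
    "group_b": (280, 500),
}

# Approval rate for each group.
rates = {g: approved / applied for g, (approved, applied) in approvals.items()}

# Disparate impact ratio: lowest group rate divided by highest group rate.
di_ratio = min(rates.values()) / max(rates.values())

print(f"Approval rates: {rates}")
print(f"Disparate impact ratio: {di_ratio:.2f}")

# The four-fifths rule flags ratios below 0.8 for further review.
# It is a screening heuristic, not a legal determination of bias.
if di_ratio < 0.8:
    print("Flag model for deeper fair-lending review.")
```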

Regulators are hard at work on these challenges, having issued at least two requests for information to the industry in the past year. They could sharply increase the effectiveness of their own fair lending examinations by using AI and machine learning models that can better detect patterns of unintended bias in lending, and by guiding industry adoption of a new generation of self-assessment tools. Regtech solutions are coming online to help financial institutions and regulators alike evaluate and mitigate the risks of this new technology while also strengthening fair-lending compliance.
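As a stand-in for the machine-learning pattern detection described above, the sketch below uses a classical two-proportion z-test, a much simpler screen an examiner might run on a lender's approval data to decide whether a disparity between groups warrants a closer look. The numbers are again hypothetical.

```python
import math

# Hypothetical exam data: approvals and applications by group.
a_approved, a_total = 420, 600
b_approved, b_total = 280, 500

p_a = a_approved / a_total
p_b = b_approved / b_total

# Pooled approval rate under the null hypothesis that both groups
# are approved at the same underlying rate.
p_pool = (a_approved + b_approved) / (a_total + b_total)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_total + 1 / b_total))

z = (p_a - p_b) / se
print(f"Approval rates: {p_a:.2f} vs. {p_b:.2f}, z = {z:.2f}")

# |z| > 1.96 corresponds to a 5% two-sided significance level. A large
# gap would prompt deeper file review, not an automatic finding of bias.
if abs(z) > 1.96:
    print("Statistically significant disparity; escalate for review.")
```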

What's exciting about the moment we're in is that win-win strategies exist that can make lending more inclusive and accurate at the same time.

As Rep. Maxine Waters, D-Calif., the chair of the House Financial Services Committee, and Rep. Bill Foster, D-Ill., the chair of the Task Force on Artificial Intelligence, said in their letter, “Financial institutions using AI have the potential to play a role in offerings to communities who have been neglected in the past,” especially affordable credit for low- and moderate-income communities of color. The result will be a safer and sounder banking system with wider credit access for all.
