Washington, Banks & AI: Here’s How to Get Ready for More Scrutiny

Don't wait for a mandate. Get your bank started on developing a policy for use and risk management of all types of artificial intelligence. Tailor it to what your bank is already doing and keep it up to date as new types of AI get added to your mix.

Across the financial services business, AI-powered solutions have taken center stage as the future of both internal and client-facing systems. But cutting-edge products and solutions often draw regulatory scrutiny. Headlines about new guidance or enforcement priorities seem almost as common as the headlines touting revolutionary innovations.

Institutions are left wondering how best to prepare to reap the rewards of technological advancement while still protecting themselves, their customers and the financial ecosystem. Not much has been heard publicly from the traditional banking regulators, but there is already a good deal of material out of Washington that institutions can review to begin gearing up for the challenges.

Opening Rounds on AI from Washington

President Biden’s October 2023 “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” is just one example of how regulators and legislators are keeping AI at the forefront of their efforts.

The executive order outlines a plan to promote innovation safely. It requires developers to disclose safety testing results, directs government agencies to set standards and assess risks, and calls on Congress to pass legislation to protect Americans from the threats of AI, among other directives. On April 1, the U.S., via the Commerce Department, and the U.K. signed a memorandum of understanding to jointly develop tests of the safety of advanced artificial intelligence models.

Publication of the executive order, coupled with guidance that reinforces regulators’ interest in AI implementation, has led to many institutions taking a careful approach to integrating AI into their processes and services.

A banking-specific example of such guidance is the Consumer Financial Protection Bureau’s circular concerning adverse action notification requirements and the proper use of the CFPB’s sample forms provided in Regulation B. That document states that when creditors take adverse action, they must provide consumers with accurate and individualized explanations, even when AI systems are used to make the decision.

On the other end of the spectrum, however, there is pressure on firms to stay up to date on trends. When competitors flaunt innovation, institutions must decide what level of risk they can tolerate.

For example, the Division of Examinations of the Securities and Exchange Commission made it clear in its 2024 Examination Priorities that it would be focusing on AI. On March 18, the SEC settled the first AI-related actions against two firms charged with “AI washing” — creating publicity based on false claims about AI expertise.

SEC Chair Gary Gensler said, “We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies. Investment advisers should not mislead the public by saying they are using an AI model when they are not. Such AI washing hurts investors.”

Whether institutions fall on the “erring on the side of caution” end of the spectrum or the “staying competitive in a rapidly modernizing market” end, it is clear that a best practice is to have a firm-wide stance on AI and a cohesive go-forward plan that can be tuned both to different types of technology and to developments in the field.

For many institutions, this means creating a written policy outlining governance and procedures around AI. With the constantly shifting regulatory landscape, it is imperative to treat this as a living document.

Read more: In The Wake of Google Gemini’s Chatbot Debacle, An Object Lesson for Banks

An Unusual Place for Banks to Start Looking

Though many federal agencies have begun issuing guidance around specific uses for AI, one agency stands out in offering resources to help institutions create internal policies on the many use cases and applications for AI technologies — the National Institute of Standards and Technology (NIST).

In January 2023, NIST published the Artificial Intelligence Risk Management Framework (AI RMF 1.0). In NIST’s words, it is “intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.” The framework provides valuable, tangible metrics for institutions to incorporate into their internal AI policies.

The first section of AI RMF 1.0 discusses how to frame AI risk. There are many types of AI and many ways to implement AI-powered solutions, so adopting a process that assesses risk on a case-by-case basis is critical.

The document recommends looking specifically at three types of harm (a brief illustrative sketch follows this list):

1. Harm to people. The ways in which AI can harm consumers have been at the forefront of regulatory and legislative action in recent years. Regulators have addressed concerns such as “black box” decisioning in credit underwriting that could lead to fair-lending violations, or mortgage lending decisioning that could lead to redlining.

Also included in this category are issues with chatbots, such as those related to privacy and security, and instances where a chatbot might provide misleading or incorrect information to a consumer.

Keep this in mind: When considering a potential “harm to people” impact, it’s critical to look not only at possible damage to an individual, but also to entire groups and communities.

2. Harm to an organization. AI offers opportunities to create efficiency and streamline processes within an institution by making computers proficient at human-like tasks, such as perceiving, judging, deciding and communicating. But this same human-like quality can pose risks to firms.

Every decision an institution makes needs to be reportable, reviewable and understandable — including the decisions a machine might make on behalf of the institution.

Just as an institution is accountable for an employee or group of employees acting on its behalf, it must also be accountable if a system that uses AI fails.

In a June 2023 spotlight, the CFPB warned that “Financial institutions risk violating legal obligations, eroding customer trust, and causing consumer harm when deploying chatbot technology.” The document made it clear that the federal consumer financial laws governing an institution’s original processes, along with the penalties for violating them, apply just as fully when a chatbot handles the task, and that entities may be liable when they fail to comply.

Clearly, banking institutions could face not only regulatory consequences for poor AI implementation but also damage to their reputation, business operations, security or financial condition.

3. Harm to an ecosystem. Assessing this category is challenging because of an inherent unknowability arising from myriad interconnected and interdependent elements. Even so, certain concrete risks to an ecosystem remain that institutions can appraise.

For example, training AI models, and especially large language models, can use substantial amounts of energy. That can have significant impacts on a firm’s CO2 emissions. Other physical factors may include data center infrastructure, e-waste or supply chain impacts.

For certain types of AI applications, it may also be necessary to consider indirect environmental impacts, such as the potential for algorithmic trading and risk assessment to influence investment decisions and resource allocation that may not prioritize sustainability or environmental stewardship.
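
How might an institution put this three-part taxonomy to work when screening a proposed AI use case? The Python sketch below is one minimal way to record findings against the three NIST harm categories and flag any category that has not yet been examined. The category names come from AI RMF 1.0; the `HarmFinding` structure, the severity scale and the chatbot example are hypothetical, for illustration only.

```python
from dataclasses import dataclass, field
from enum import Enum


class HarmCategory(Enum):
    """The three harm categories described in NIST AI RMF 1.0."""
    PEOPLE = "harm to people"
    ORGANIZATION = "harm to an organization"
    ECOSYSTEM = "harm to an ecosystem"


@dataclass
class HarmFinding:
    """A single identified risk for a proposed AI use case (illustrative fields)."""
    category: HarmCategory
    description: str
    severity: str                              # hypothetical scale: "low" / "medium" / "high"
    mitigations: list[str] = field(default_factory=list)


def unexamined_categories(findings: list[HarmFinding]) -> set[HarmCategory]:
    """Return the harm categories an assessment has not yet addressed, so that
    every use case is screened against all three on a case-by-case basis."""
    return set(HarmCategory) - {f.category for f in findings}


# Hypothetical example: screening a consumer-facing chatbot.
chatbot_findings = [
    HarmFinding(HarmCategory.PEOPLE,
                "Chatbot may give misleading or incorrect account information",
                "high",
                ["human escalation path", "sampled response review"]),
    HarmFinding(HarmCategory.ORGANIZATION,
                "Unreviewable responses could create compliance exposure",
                "medium",
                ["full response logging", "periodic audit"]),
]

print(unexamined_categories(chatbot_findings))  # the ecosystem category still needs review
```

The point of the check is procedural: a use case is not considered assessed until all three categories have at least been considered, even if the conclusion is that a category does not apply.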

Read more: Why it’s Time for Banks to Hire a Chief AI Officer — and What That Looks Like

How to Use NIST’s Process to Frame AI Policies

This broad approach to ascertaining and addressing risk forms the foundation of the framework NIST calls the “AI RMF Core.”

Banking institutions can use that core to manage AI risk while allowing for flexibility across AI use cases. NIST recommends that AI risk management be timely and performed throughout the lifecycle of an AI application, and that the process include “the views of AI actors outside the organization.”

The NIST core consists of four functions:

1. Govern. This function is overarching and meant to be infused into each of the other functions. Govern includes ensuring institution-wide policy transparency, cultivating a culture that empowers individuals to make responsible decisions, and engaging teams to cooperate on procedures and practices. Accountability is a core tenet of governance, along with workforce diversity, equity, inclusion and accessibility, and ongoing assessment of third-party and supply chain risks.

2. Map. The Map function requires an internal team with a variety of perspectives to assess and identify the AI’s context, categorization, capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks.

The Map stage also requires identification of risks and benefits for all components, including third-party software and data. It should also consider potential impacts to individuals, groups, communities, organizations and society.

3. Measure. The Measure function consists of assessing AI risk from three perspectives: a qualitative approach, a quantitative approach, and a mixed-method approach. This includes identifying methodology and metrics for risk measurement, evaluating systems for trustworthiness, tracking identified risks, and gathering and assessing feedback.

A key point from NIST: “Characteristics of trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.”

4. Manage. The Manage function pulls together the information gathered in the first three steps to help the institution reduce AI risks holistically and across the AI’s entire lifespan. Ongoing monitoring plays a key role in this function, and risks must be responded to as they arise. A brief sketch of how these four functions might be captured in an internal risk register follows.
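
As a rough illustration of how the four Core functions might translate into internal tooling, the sketch below organizes a single risk-register entry around Govern, Map, Measure and Manage, and includes a simple check that keeps the register a living document by flagging entries overdue for review. The function names come from AI RMF 1.0; the `RegisterEntry` fields, the example entry and the review logic are assumptions for illustration, not anything NIST prescribes.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RegisterEntry:
    """One AI use case tracked through the four AI RMF Core functions.
    Field names and groupings are illustrative, not prescribed by NIST."""
    use_case: str
    # Govern: ownership and the internal policy provisions that apply.
    accountable_owner: str
    policy_references: list[str] = field(default_factory=list)
    # Map: context, intended use and third-party components in scope.
    intended_use: str = ""
    third_party_components: list[str] = field(default_factory=list)
    # Measure: the metrics and trustworthiness checks being tracked.
    risk_metrics: dict[str, str] = field(default_factory=dict)
    # Manage: mitigations in place and when the entry is next reviewed.
    mitigations: list[str] = field(default_factory=list)
    next_review: date = field(default_factory=date.today)


def overdue_for_review(register: list[RegisterEntry], today: date) -> list[str]:
    """Flag entries whose scheduled review date has passed, one way of keeping
    the policy current as uses, models and rules change."""
    return [e.use_case for e in register if e.next_review < today]


# Hypothetical example entry for a credit-underwriting use case.
register = [
    RegisterEntry(
        use_case="credit underwriting model",
        accountable_owner="model risk management",
        policy_references=["AI policy section on adverse action explanations"],
        intended_use="second-look screening of consumer credit applications",
        third_party_components=["vendor scoring model"],
        risk_metrics={"fair lending disparity": "reviewed quarterly"},
        mitigations=["reason-code validation", "annual model revalidation"],
        next_review=date(2024, 9, 30),
    ),
]

print(overdue_for_review(register, today=date(2024, 10, 15)))  # ['credit underwriting model']
```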

If You Are Building Up AI, Policies Are a Must

The NIST framework provides an encompassing strategy for creating an internal AI policy. But regardless of whether an institution decides to follow this specific structure, remaining proactive is always the best strategy.

This means not only monitoring and tracking regulatory and legislative changes, which can often be reactive and lag the pace of modernization, but also creating a governance framework that can be applied to multiple use cases.

With the financial services sector at the forefront of technological advancement, and AI in the spotlight for both its potential benefits and potential drawbacks, it is critical that institutions have a solid plan in place to adopt and govern these new and innovative tools.

See all of our latest coverage on artificial intelligence in banking.

All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.