Bankers embrace new guidelines for ethical AI

IBM has outlined principles to promote transparency — and foster public trust — in the way companies use artificial intelligence. The principles call on banks and other organizations to designate a lead AI official, own up to their use of the technology, explain it and test it for bias.

Bankers say they’re already on it.

IBM unveiled the principles last month at Davos through its new IBM Policy Lab. The goal was to offer guidance for developing policy that protects society without stifling innovation.

The company says it hopes its approach will “hold companies more accountable, without becoming over-broad in a way that hinders innovation or the larger digital economy.”

IBM Chief Executive Ginni Rometty, speaking to a standing-room-only crowd at Davos, apologized for the shortage of chairs — the company had not anticipated such interest.

“I remember coming to Davos four or five years ago, producing a paper that said we’re going to write down our principles for trust and transparency in this era, and everyone’s like, 'Oh' — they weren’t that interested,” Rometty said.

What has changed, she said, is that people do not trust technology to make the world better.

“I believe at the heart of this is that people are not sure there’s a better future here, when they take all of this technology into consideration,” Rometty said.

It is vital, the company said in its announcement, for companies to be forthcoming and clear about possible bias in AI and to let customers know when they are dealing with AI.

How banks are tackling the challenge

The guidelines align with what many bankers were already thinking.

Regions Financial, based in Birmingham, Ala., uses IBM’s Watson technology for customer interaction, to direct customers to the best queue and to support customer service representatives by analyzing a customer’s accounts and suggesting a next best action. It says it has strong governance for AI and monitors its use continuously.

The bank puts the principle IBM lists fifth — test your AI for bias — first in its governance efforts.

“Before using any AI we build very strong AI risk management looking at model fit and data bias,” said Jacob Kosoff, the head of model risk management and validation for Regions. “We test all the models in the bank, including AI-based ones, for accuracy and fairness. We built these strong programs ahead of deploying any AI.”
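Regions has not published its test suite, so the following is an illustrative sketch only. One common fairness check of the kind Kosoff describes is the demographic parity gap, which compares favorable-outcome rates (for example, approval rates) across groups of customers. The function name, sample data and any tolerance used here are assumptions, not the bank's actual process.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in favorable-outcome rates between any two groups.

    predictions: list of 0/1 model decisions (1 = favorable, e.g. approve)
    groups: list of group labels aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative data: approval decisions for two applicant groups.
preds = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # prints "demographic parity gap: 0.20"
```

In practice a bank would run checks like this per protected attribute and flag any model whose gap exceeds a governance-approved tolerance for further review.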

The model risk management group gets involved early on, he said. A new AI application typically starts with business needs and business owners who decide they need some AI.

“Then they partner with different parties, whether in-house or external," Kosoff said. "We provide guidelines and strong policy and procedures for how to build AI, or what to consider in purchasing.”

Some vendors resist explaining how their AI works, as Cathy O’Neil has described in her book “Weapons of Math Destruction.” But Kosoff said that the field is crowded with AI vendors and the bank can find one willing to have a conversation about how its black box works.

“We want to make sure we can test models for explainability," Kosoff said. "One of our greatest assets is a highly experienced model risk management team. When the bank is in discussion with vendors around the platforms, the line-of-business leaders will come to us, and we can coach them in how to work with the vendors.”

Before any algorithm goes into production, it goes through heavy testing, he said. And monitoring continues.

“The amount of time needed for continuous monitoring might be more than people would think,” Kosoff said. “You need rigorous, ongoing monitoring.”
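Kosoff did not describe Regions' monitoring tooling. As a hypothetical illustration, one drift check widely used in bank model risk management is the population stability index (PSI), which compares the distribution a model was developed on with the data it now sees in production; the binning scheme and rule-of-thumb thresholds below are conventional, not the bank's own.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a model's development sample and current production data,
    computed over equal-width bins.

    A common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))

    def fractions(values):
        counts = [0] * bins
        for v in values:
            # map v to a bin; clamp the top edge into the last bin
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        # a tiny floor keeps log() defined when a bin is empty
        return [max(c / len(values), 1e-6) for c in counts]

    e = fractions(expected)
    a = fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ai, ei in zip(a, e))

# Identical data yields a PSI of zero; a shifted distribution drives it up.
baseline = [i / 100 for i in range(100)]
drifted = [min(v + 0.3, 1.0) for v in baseline]
print(population_stability_index(baseline, baseline))        # 0.0
print(population_stability_index(baseline, drifted) > 0.25)  # True
```

Running a check like this on every scoring batch is one way the "rigorous, ongoing monitoring" Kosoff describes gets automated.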

The bank has developed its own program to train analysts — the Regions Analytics Institute.

“We use it to train our data scientists to develop algorithms in an ethical manner," Kosoff said. "We build with high levels of transparency, explainability and replicability. We teach them how to test for bias."

More than 50 data scientists have been trained through this program, he said. Twenty-five alumni from the risk management group have taken their expertise in AI and governance to other positions throughout the bank.

“We are placing well-trained people into lines of business so when those areas begin to think of using AI we have the right talent in place,” Kosoff said.

Matt Lusco, the bank’s chief risk officer, gave the risk group some simple guidance.

“Integrity, trusted relationships and proven competence," he said. "If you have all three you can be effective risk managers.”

Kosoff expects AI and machine learning to continue to grow in importance.

“In five years, data and analytics are going to be more and more part of business as usual," he said. "At Regions, we believe models and analytics will be used throughout the bank.”

Banks are mixing the use of AI with judgment by professional staff — in other words, real people.

“We are still in early stages,” said Malcolm Griggs, the chief risk officer of Citizens Financial in Providence, R.I. “The work we have done on machine learning, data science and automated processes as tools to help customers and streamline decisions is tested to guard against unconscious bias that would result in unfair treatment or inappropriate use of data."

“AI-like tools can potentially produce more accurate mathematical results through sophisticated algorithms, but those are not necessarily the right results for a customer, and we have to pay attention to the human and ethical factors," Griggs said. "These are tools that can be helpful, but they are not a substitute for common sense and sound judgment.”

Tony Cyriac, BMO Financial Group’s chief data and analytics officer, said the Toronto bank is already following guidelines similar to those proposed by IBM.

BMO "is working towards accelerated business outcomes using AI while preserving the trust with our customers and internal stakeholders with our Trustworthy AI initiative,” he said. “We are achieving these outcomes by designing for fairness, transparency and explainability as initial priorities, rapidly identifying and mitigating issues posed by the incremental risks of AI and enhancing its development, enterprise risk, privacy, legal and ethics practices to embrace AI at scale.”

U.S. Bancorp in Minneapolis is currently using AI in numerous applications including fraud detection, customer service improvements, directing calls to the best person, process automation and forecasting, according to Tanushree Luke, senior vice president and head of artificial intelligence in the bank's chief digital office. In the future it expects to expand its use of AI in customer services, cybersecurity and explainability, she said.

State of AI at IBM

IBM is working with banks around the world on applications such as direct customer service with text or voice and intelligent routing of calls to the correct resources. Banco Bradesco in Brazil is using Watson to handle a few hundred thousand questions a month with 95% accuracy, and Royal Bank of Scotland is using Watson both for current customer interaction and to assist customer service representatives, according to IBM.

Banking AI is still in its early days, and experience can be uneven from one bank to another, as anyone who has shouted at a (not very) intelligent voice response system knows.

For IBM, AI is not just a single category, said Shanker Ramamurthy, general manager for strategy and market development for industry platforms at the company.

“We call it Watson,” explained Ramamurthy, “and it’s a series of capabilities in AI and machine learning spanning a very broad range including conversation, discovery, vision, speech, empathy and language, all available through a set of [application programming interfaces] and microservices.”

Watson works across numerous businesses, but it is especially relevant in banking because finance is an entirely digital business, he said.

The company’s consultants work with banks to implement AI.

“We don't just sell Watson as a set of APIs and walk away — our consultants work with clients to ensure it is deployed in the optimal way,” Ramamurthy said. “In all these technologies, knowing what it cannot do is as important as knowing what it can do. We go through customer journeys and see where you can use Watson.”
