Banks embrace AI, but not for customer communications: Survey

A recent survey showed a majority of banks use generative AI, mainly for tasks like writing emails and detecting fraud, but few say they have built customer service chatbots with it.

A recent survey showed that most leaders at banks and credit unions use generative artificial intelligence, like ChatGPT or Google Gemini (previously Bard), for either work or personal tasks, but few have built dedicated products at their institutions leveraging the technology.

In the survey conducted by American Banker's parent company Arizent, 61% of the 127 bankers who responded said they were using generative AI either in their personal or work life, including 28% who said they used it in both contexts. By and large, banks that use the technology do so for general office tasks (45%); fewer use it for direct client or customer communications (23%).

Among respondents who said their institutions were using generative AI or planned to do so, the most common uses after general office productivity were improving emails and marketing communications (46%) and helping to fight fraud (36%). Less common uses included recruiting and hiring new employees (11%), onboarding customers or clients (12%) and onboarding employees (15%).

Banks were already using AI prior to the rise of generative AI in 2023, especially as a tool for monitoring transactions for suspicious and illegal activity, including fraud. These models are trained to identify suspicious transactions by analyzing transaction amounts, the accounts involved, the typical transactions made by those accounts and other network- and numbers-focused data.
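The pattern-spotting logic described above can be illustrated with a toy sketch: flag any transaction whose amount deviates sharply from an account's history. This is a minimal z-score rule for illustration only, not any bank's actual monitoring model, and the threshold and feature choice are assumptions.

```python
from statistics import mean, stdev

def flag_anomaly(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from an
    account's historical pattern (toy z-score rule; real systems
    weigh many more features, such as counterparties and timing)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > threshold

history = [42.0, 38.5, 40.0, 45.0, 41.2, 39.9]
print(flag_anomaly(history, 41.0))    # prints False: typical amount
print(flag_anomaly(history, 5000.0))  # prints True: large outlier
```

Production systems replace this single-feature rule with models trained on networks of accounts and transactions, but the principle is the same: learn what "normal" looks like and surface deviations.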

Generative AI has supercharged financial crime detection in other contexts, particularly email, where existing transaction monitoring systems are not equipped to operate.

"There's an inherent signal in every email that's created," said Ryan Schmiedl, global head of payments, trust and safety at JPMorgan Chase, at a conference in July. "Actors that are trying to create fraudulent emails tend to basically use different patterns and you can learn those patterns through machine learning."

This is where generative AI has exploded in utility in recent years. Whereas transaction monitoring systems excel in recognizing patterns across networks of transactions, large language models (a type of generative AI) excel in recognizing patterns in language, akin to reading and comprehending text.
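A crude way to see "patterns in language" at work is a weighted keyword score over an email's text. The cue phrases and weights below are hypothetical; a real fraud model, let alone an LLM, learns such signals from labeled data rather than hard-coding them.

```python
import re

# Hypothetical cue phrases a fraud-pattern model might weight.
# A real system learns these from data; they are hard-coded here
# purely for illustration.
SUSPICIOUS_PATTERNS = {
    r"\burgent\b": 2,
    r"\bwire transfer\b": 3,
    r"\bverify your account\b": 3,
    r"\bgift cards?\b": 2,
}

def fraud_score(email_text):
    """Sum the weights of suspicious phrases found in the email
    (toy pattern-based scoring, nothing like a production LLM)."""
    text = email_text.lower()
    return sum(weight for pattern, weight in SUSPICIOUS_PATTERNS.items()
               if re.search(pattern, text))

print(fraud_score("Urgent: verify your account via wire transfer"))  # prints 8
print(fraud_score("Lunch on Friday?"))                               # prints 0
```

Where this sketch matches literal phrases, a large language model scores text against patterns learned from vast corpora, which is why it can catch fraudulent emails that evade fixed keyword lists.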

It's also one of the areas where attitudes toward professional use of AI are warmest. In Arizent's survey, 62% of respondents said they trust AI to be mostly or wholly responsible for detecting and mitigating fraud. Only two tasks ranked higher in trust: assisting employees with routine inquiries (72%) and assisting customers with routine inquiries (69%).

In banking and other industries, assisting with inquiries is indeed one of the most common use cases for generative AI, particularly in the form of chatbots and copilots. Large language model creators including OpenAI, Anthropic, Aisera, Kasisto and others have stood up enterprise products that offer companies varying levels of specialization to their industries.

For the most part, banks have not implemented language models this way. For example, despite the large percentage of respondents who said they would trust generative AI to assist with routine inquiries, only 29% said they were actually using (or planned to use) generative AI to help contact centers or customer service representatives answer questions.

This gap between what banks would trust AI to do and what they plan to let it do is likely due to regulatory scrutiny. The Consumer Financial Protection Bureau warned in June that poorly deployed chatbots could harm consumers, and the Federal Trade Commission began an investigation the next month into the data security practices at OpenAI.

Overall, bankers have several reservations about putting products built on generative AI in front of consumers. Still, some larger institutions and fintechs have implemented LLM-powered employee and customer assistance.

Morgan Stanley has partnered with OpenAI for over a year to have a language model churn through hundreds of thousands of pages of wealth management documentation to help employees sift through and understand the vast repository of knowledge. As for customer-facing implementations, Dave announced in December that it had launched DaveGPT, a customer service assistant powered by generative AI.
