'Job security forever': Bank leaders on AI and cyber defense

The Sibos sign outside the Metro Toronto Convention Centre

Training employees to spot typos or wonky grammar in an email as a sign of phishing is not enough to thwart cyber attacks in the age of advanced AI.

"Now the bad guys can turn around [to a large language model] and say 'please write me a very good phishing email,'" said Jean-Francois Legault, the deputy chief information security officer at JPMorgan Chase, at a panel on AI and cybersecurity at the Sibos conference this week. "When humans are being targeted by other humans, some of it is being simplified by a machine."

Yet artificial intelligence cuts both ways. It intensifies the cyber vulnerabilities that a bank faces, such as deepfake schemes, but it can also be used to counter cyber threats.

"The people are the most vulnerable point," said Reggie Townsend, vice president of the data ethics practice at AI and analytics software company SAS, on the same panel.

Legault recommends moving away from focusing on details like typos, which he labeled an oversimplification.

"You need to evolve the way you're looking at a lot of your cyber awareness training," he said. "You are moving toward education — does this make sense? In your business context, is someone asking you to deviate from the process? These things will be harder to detect."

At the same time, there is value in models that detect fraud more skillfully than traditional means, Legault pointed out, as well as in using automation to sift through the numerous data points that cybersecurity teams need to ingest, including logins, news events and reports from vendors.

SAS has been leaning into a "soft governance" model, said Townsend.

That includes a cross-functional team of executives focusing on the ethical dilemmas related to AI and the formation of "ethics circles." This kind of training unites small groups of employees from around the globe and has them grapple with ethical questions that may go beyond their current roles.

"We won't be able to predict every phishing exercise, but we can give people a level of awareness and tools to deal with some of these things," said Townsend.

One question that came up during the panel was whether the chief information security officer, or CISO, role will become redundant if AI learns how to handle attack vectors and continues to learn from large language models.

Absolutely not, said Legault.

"There is job security forever," he said. "Governance remains key. Having a human who can interpret the information and take action is essential."

In fact, the role will expand in scope, to consider such questions as ethics and responsibility in AI, said Kris Lovejoy, the global practice leader of security and resiliency for Kyndryl, a provider of information technology infrastructure services.

"If we want to be effective in applying this technology in a responsible way, what is the governance and management structure for doing this?" she said. "The organizations I see that are effective are the ones that are tackling that question," and involve a number of roles in these decisions.

In a separate panel on generative AI, Andrew Pade, the general manager of cyber defense operations and security integration at Commonwealth Bank of Australia, explained his institution's method for building a robust and highly trained workforce. The data science team and the bank's most advanced cybersecurity experts consider their past experiences dealing with threats and try to predict the next events so junior employees can act on them.

"All of a sudden, we don't need to leverage the market for high-end skills," he said.

To further instill trust in the institution's defenses, Pade deploys a "purple team" — a blend of "red team" hackers in a company and "blue team" defenders — to track these attacks through their life cycle and fine-tune models if they are not picking up threats. 

In an interview, Andy Schmidt, the global industry lead for banking at IT and business consulting services firm CGI, echoed these points.

"Continuous testing of networks and applications is important," he said. "If you make an upgrade, do an application penetration test to make sure that the door you closed didn't open a window. That's one of the classic issues that you see."
