Visa, AI And The $25B Fraudsters Never Got

Cybercrime is both pervasive and expensive, as modern problems go.

According to reports citing the University of Maryland, an attempted hack occurs every 39 seconds. By some estimates, cybercrime will have cost the world $6 trillion by 2021 in damaged and destroyed data, stolen money, lost productivity, theft of intellectual property, theft of personal and financial data, embezzlement, fraud and post-attack disruption to the normal course of business.

As Visa’s Melissa McSherry, SVP and global head of credit and data products, told Karen Webster in a recent conversation, fraudsters are persistent and often sophisticated, and VisaNet is a favored target — as it has been for decades.

“The fraudsters are very creative — and they use high-tech tools and artificial intelligence [AI] as well. They are constantly testing out network security patterns, and taking a very structured approach to [try] to break through our defenses. They are always learning from us, as we are learning from them,” she said.

The trick is to make sure Visa’s education about fraudsters stays ahead of the fraudsters’ education on VisaNet — and, as the world learned yesterday (June 17), it is a trick that Visa is pulling off convincingly. The card network announced that its AI-based Visa Advanced Authorization (VAA) security product has helped issuers prevent an estimated $25 billion in annual fraud. Overall, Visa’s global fraud rate is at a historic low of less than 0.1 percent.

The results, McSherry told Webster, point to the iterative approach Visa has taken to cybersecurity in the more than two and a half decades since it first built neural-network (AI) technology into VisaNet as a fraud-fighting tool, and to how foundational AI modeling is across the entire suite of security-connected products Visa offers. They also point to the many miles still left to go as Visa, with its partner merchants and issuers, faces the other half of the challenge: keeping bad transactions out is important, she said, but making sure good ones go through is arguably even more important.

The AI Foundation

There might be a temptation to look at the entire box of security tools Visa throws at cybercrime and wonder if AI is being given too much credit, when things like chip-and-PIN cards and tokenization have also been part of that 0.1 percent result.

McSherry acknowledged that point, noting that in a world where an attempted hack comes every 39 seconds, a deep and diverse toolbox is a good thing. Many of those hacks are what she and Webster called “amateur hour” attempts, about as effective or worrisome to Visa as a man armed with a fork would be to a fortress. Yet, when the criminals go pro (and are backed by international organized crime or, in some cases, state actors), the efforts get more sophisticated and multi-tiered. The uses for the purloined data and funds, she noted, also get more socially corrosive.

However, even in that diverse set of tools, she told Webster, “AI is fundamental to everything we do.”

“At every layer (in authentication, authorization), there is some sort of modeling as part of the security toolset,” she explained. “And in every one of [those layers], at least one of those models is leveraging AI to make it better. It is pervasive in how we prevent fraud.”

Visa can never kick back and rest, she noted, as fraudsters don’t rest. However, having been at this kind of data-focused fraud fighting for over two decades has its advantages, particularly in developing multi-layered responses.

“The question we now have to use that data to answer is, ‘How do we prevent fraudsters from disrupting transactions?’” McSherry said, and the answer is bigger than just stopping the bad ones.

Letting More Good Through

While saving $25 billion in fraud losses is a good thing, it is an accomplishment Visa has little time to savor, because it is already on to the next problem: battling false declines. Customers, she noted, aren’t terribly understanding when their cards get declined for any reason other than insufficient funds or credit. If they don’t understand why a card doesn’t work, they almost instantly stop trusting it, and the card drops to the bottom of their wallets.

What Visa is now explaining to clients is that many transactions it scores as low-risk are still being declined.

“We say this isn’t just about being safe. This is about customer experience, and keeping loyalty and trust in place. It really helps them to see that broader context, and [reevaluate] how their fraud rules are balanced,” she said.
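To make that balance concrete, here is a minimal, hypothetical sketch of the idea; the scores, thresholds and function names below are invented for illustration and are not Visa’s or any issuer’s actual decision logic. It simply shows how an overly conservative decline threshold applied to a network-supplied risk score turns away low-risk transactions, while a rebalanced threshold still blocks the riskiest ones and lets more good spend through.

# Hypothetical illustration only: the scores, thresholds and rule below are
# invented for this sketch, not Visa's or any issuer's actual logic.

def issuer_decision(network_risk_score: float, decline_threshold: float) -> str:
    """Approve a transaction unless its risk score exceeds the issuer's threshold."""
    return "decline" if network_risk_score > decline_threshold else "approve"

# A batch of transactions with made-up risk scores between 0 (safest) and 1 (riskiest).
transactions = [0.02, 0.05, 0.40, 0.92, 0.07, 0.65]

# An overly strict threshold declines mid- and low-risk transactions (false declines)...
strict = [issuer_decision(score, decline_threshold=0.30) for score in transactions]

# ...while a rebalanced threshold still blocks the riskiest ones but approves more good spend.
rebalanced = [issuer_decision(score, decline_threshold=0.60) for score in transactions]

print(strict)      # ['approve', 'approve', 'decline', 'decline', 'approve', 'decline']
print(rebalanced)  # ['approve', 'approve', 'approve', 'decline', 'approve', 'decline']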

Moreover, as AI increasingly moves into background, risk-based authentication, Visa is able to unlock much more data and build it into its models. As that happens more often, pushed along by open data regulation in Europe, Visa can fold more of that authentication data into its risk models, and it is seeing fraud rates go down and authorization rates go up.

The result is a better experience for consumers, one that asks less of them while keeping their data more private than before.

It’s a lot to do, and a lot of integration and persuasion. In cybersecurity, McSherry noted, one can never take a low fraud rate for granted, because a virtual army of hackers is always hoping to turn the tide of the battle. However, it helps that Visa has been guided by the same philosophy in pushing its AI efforts since 1993 — and it’s paying off.

“Our goal is to make sure every good transaction goes through, [and] every bad one is declined,” she said. “We work very hard at preventing fraud, but we also [work] hard to know what is low-risk enough that it should be passed through.”