How To Fight Online Fraud: Ruin Cybercrime’s ROI

Let’s get a couple of things straight: points that might go against what every payments and commerce professional knows, but which will matter for the rest of the story.

First, there is no safety — or perhaps even wisdom — in crowds.

Second, friction is good.

Welcome to a different world — in this case, the world of digital thieves, hackers and other such criminals. Stepping into this world is vital for anyone serious about preparing a proper defense against online fraudsters in 2019. Sure, that’s almost a cliché — know thy enemy. But in a new PYMNTS discussion, Karen Webster and Fang Yu, CTO and co-founder of DataVisor, step further into that world than is often the case, offering not only the latest information about the digital fraudster mindset, but also the latest guidance on how to defeat such criminals.

The 2018 holiday season has barely ended, and it’s a good bet that retailers and payments players have yet to identify all the fraud that took place. That’s because, according to Yu, the best criminals — the ones who make great livings from their illegal craft, and likely will never have to worry about spending a night in a police station, much less jail — are smart and disciplined enough to mount low-volume attacks (think of them as nimble flanking moves) rather than the head-on, high-volume thrusts that would amount to an assault on an enemy’s center. The financial sector, specifically, stands as the typical target for such sophisticated criminals, Yu told Webster.

The rest of the year will bring other such attacks, and no doubt new attacks barely imagined even by veteran fraud-prevention experts. And the commerce and payments industry will keep erecting better defenses against these activities. Among the most promising is the deployment of machine learning technology that can recognize patterns that indicate fraud.

Criminal Rings

The dominant source of fraud these days is online fraud, Yu said, and the more data a criminal gang can steal, the better its chance at profit. But an apparent strength of those gangs — safety in numbers, which brings shared work, shared knowledge and economies of scale — can also become a weakness when put up against machine learning techniques.

“Machine learning can identify criminal rings, and which ones are associated with other rings,” Yu said.

More specifically, it can identify patterns that signal fraud. Say a specific consumer, for whatever reason — a bonus from her job, an inheritance, lottery winnings — suddenly starts buying some luxury goods after years of not being able to afford such products. An algorithm can determine (with the help of other data points) that the apparent shift in consumer behavior is indeed legitimate, and not an indication that the consumer’s account or data has been compromised.
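To make the idea concrete, here is a minimal sketch (not DataVisor’s actual method) of how such a determination might work: an unusually large purchase raises a risk score, and corroborating data points such as a familiar device, a known shipping address and a consistent location pull it back down. All feature names and thresholds here are hypothetical.

```python
# Minimal sketch: score a sudden spending spike, then discount the risk
# when contextual signals corroborate the account holder's identity.
# Feature names and thresholds are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    device_seen_before: bool      # device matches the account's history
    ship_to_known_address: bool   # shipping address previously used
    geo_matches_history: bool     # location consistent with past activity

def risk_score(txn: Transaction, avg_past_amount: float) -> float:
    """Return a 0..1 risk score for a single purchase."""
    # Base risk: how far this purchase deviates from the account's norm.
    spike = max(0.0, (txn.amount - avg_past_amount) / max(avg_past_amount, 1.0))
    risk = min(1.0, spike / 10.0)  # cap the contribution of the spike

    # Corroborating context lowers the risk of an otherwise unusual purchase.
    for signal in (txn.device_seen_before, txn.ship_to_known_address,
                   txn.geo_matches_history):
        if signal:
            risk *= 0.5

    return risk

# A shopper who suddenly buys luxury goods from her usual device and address
# scores low; the same spike from a new device and address scores much higher.
legit = Transaction(2500.0, True, True, True)
suspect = Transaction(2500.0, False, False, False)
print(risk_score(legit, 80.0), risk_score(suspect, 80.0))  # 0.125 vs. 1.0
```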

But machine learning can also take in massive amounts of data and determine when such purchases are most likely fraudulent. That’s because criminal gangs tend to operate in ways that come across as coordinated, Yu said. That holds true even when it comes to promotions. Legitimate redemptions of promotions would look to an algorithm like a “very diverse” set of actions, given the differences among the consumers taking part. By contrast, criminals tend to take actions that appear, mathematically and statistically at least, to be almost the same.
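As a rough illustration of that statistical sameness (again, a sketch under stated assumptions, not DataVisor’s system), one could describe each account’s promotion-redemption behavior as a feature vector and flag groups of accounts whose vectors are nearly identical. The feature choices, account names and similarity threshold below are all hypothetical.

```python
# Minimal sketch: flag groups of accounts whose promotion-redemption behavior
# is nearly identical, a hallmark of coordinated rings rather than organic,
# diverse consumer activity. Names, features and the threshold are hypothetical.

from itertools import combinations
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two behavior vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Each vector describes one account: redemptions per day, seconds between
# signup and first redemption, items per order, hour of day of activity.
accounts = {
    "acct_01": [5.0, 30.0, 1.0, 3.0],
    "acct_02": [5.0, 31.0, 1.0, 3.0],
    "acct_03": [5.0, 29.0, 1.0, 3.0],      # near-clones of acct_01: suspicious
    "acct_04": [1.0, 600.0, 4.0, 19.0],    # organic-looking behavior
}

SIMILARITY_THRESHOLD = 0.999  # "almost the same," in the article's terms

# Naive grouping: link accounts whose vectors are nearly identical,
# then merge the links into rings.
links = {name: {name} for name in accounts}
for (n1, v1), (n2, v2) in combinations(accounts.items(), 2):
    if cosine(v1, v2) > SIMILARITY_THRESHOLD:
        merged = links[n1] | links[n2]
        for n in merged:
            links[n] = merged

rings = {frozenset(group) for group in links.values() if len(group) > 1}
print(rings)  # one ring: acct_01, acct_02, acct_03
```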

Criminal Patience

Sure, the best criminals — those who have earned their place in the major leagues of digital fraud, and have managed to stay there — know that. It’s one reason they tweak their operations to, say, run attacks at relatively low volume so as to stay hidden. “They can be very patient as well,” Yu said.

For instance, they may create fraudulent accounts and let them sit idle for a stretch of time — days, weeks or months. “The longest one we’ve seen is three years,” she said. The use of dynamic IP addresses, fake SMS messages — such as confirmations supposedly from financial institutions, a con that can require significant investment in gear — and other tactics also serves to protect the best digital criminals from detection.

That’s a dispiriting thought.

But there is hope — not for defeating such crime outright, but for making it too expensive for many fraudsters to pursue. And that brings us to friction — specifically, introducing enough friction into certain online processes to shrink fraudsters’ ROI.

No More Cat-and-Mouse

Yu told Webster that the trick is to stop putting so much focus on responding to attacks that have already happened. “It’s like a cat-and-mouse game, and it has only short-term benefit,” she said. Instead, companies and other organizations need to think proactively and long-term — not only through machine learning and other newer technologies, but also by sharing data across business units, so that algorithms and the people involved in fraud prevention have a fuller view of the fraud landscape and better chances of spotting those patterns.

That doesn’t mean putting up digital roadblocks that frustrate consumers and put revenue at risk, of course. It means collecting as much data as possible about consumer behavior and using technology to separate legitimate transactions — even out-of-the-ordinary ones — from likely fraud. The best way to stop something is often to reduce the financial incentive behind it. That might not solve every problem, but it gives defenders a solid fighting chance.