Digital Fraud And AI Battle To The Death


All in all, cybercriminals had a pretty good year. In Q3 2019 alone they made off with cryptocurrency valued at almost $4.5 billion. More than one-third of Americans surveyed reported being victims of cybercrime in 2019, and 41 percent of those incidents involved the theft of credit or debit card information. Despite the efforts of law enforcement and the financial community, bad actors are on track to steal more than $35 billion over just the next five years.

The good news is that it’s getting more difficult for fraudsters to ply their trade. FinTechs and financial institutions (FIs) are banding together to create a digital dragnet that never sleeps, because crooks never do.

Evil Corp. vs. the FinTechs

Among the inventive ways companies are fighting digital fraud in 2020 is Teradata’s new approach of analyzing computer mouse movements, since hackers, it turns out, move their mice very differently from legitimate users. Fraud solution provider SiS-id and payments network Tradeshift partnered on a new mobile app that uses blockchain-based storage to detect attacks while cutting supplier verification time by 80 percent. SiS-id’s own analysis found that four in five businesses had fallen prey to hackers successfully posing as legitimate suppliers.
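To make the mouse-movement idea concrete, here is a rough sketch of how telemetry might separate bots from humans. This is not Teradata’s actual method; the event format and thresholds are hypothetical. The point is simply that automated sessions tend to show unnaturally uniform speed and heading.

```python
# Illustrative only: score one session's mouse telemetry for "bot-like" motion.
# Assumes each session provides a list of (x, y, timestamp_ms) events.
import math

def movement_features(events):
    """Simple velocity and heading statistics for one session."""
    speeds, headings = [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(events, events[1:]):
        dt = max(t1 - t0, 1)                      # avoid division by zero
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
        headings.append(math.atan2(y1 - y0, x1 - x0))

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    # Scripted mice tend to glide in straight lines at constant speed,
    # so very low variance in speed and heading is a red flag.
    return {"speed_var": variance(speeds), "heading_var": variance(headings)}

def looks_automated(events, speed_floor=1e-4, heading_floor=1e-3):
    features = movement_features(events)
    return (features["speed_var"] < speed_floor
            and features["heading_var"] < heading_floor)
```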

Fraud prevention tech also has to scale, because fraudsters (especially state-backed groups, but others too) are now launching massive bot attacks that are almost as sophisticated as the top security software. To confront this deviousness, fraud solutions provider DataVisor joined forces with credit reporting bureau Experian to supercharge the latter’s CrossCore identity platform. They are augmenting CrossCore’s powerful machine learning with DataVisor’s dCube technology, a proprietary decision tool that picks up on fraud patterns “without the need of training data or labels,” the companies said in a statement.
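dCube itself is proprietary, but the general idea of spotting fraud without labeled training data can be illustrated with a standard unsupervised technique. The sketch below uses scikit-learn’s IsolationForest on made-up transaction features; the feature names and numbers are hypothetical and have nothing to do with CrossCore.

```python
# Illustrative sketch of label-free anomaly detection on transaction features.
# Not DataVisor's dCube; just the general unsupervised idea via IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: amount (USD), transactions in last hour, distinct devices in last day.
normal_traffic = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),   # typical purchase amounts
    rng.poisson(1, 5000),            # low transaction velocity
    rng.integers(1, 3, 5000),        # one or two devices per cardholder
])
model = IsolationForest(contamination="auto", random_state=0).fit(normal_traffic)

# A burst of high-value purchases from many devices scores as anomalous
# even though no transaction was ever labeled "fraud".
suspicious = np.array([[900.0, 12, 7]])
print(model.decision_function(suspicious))   # negative score: outlier
print(model.predict(suspicious))             # -1: flagged as anomaly
```

The model never sees a single example tagged as fraud; it simply learns what ordinary traffic looks like and flags whatever deviates sharply from it.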

The artificial intelligence (AI)/machine learning (ML) approach now favored by FIs and FinTechs bypasses traditional verification methods, including legacy rules-based systems that are too easily fooled by today’s digital scammers.
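Why are rules so easy to fool? A toy example (purely illustrative, not any vendor’s production logic): a fraudster who learns that purchases over $500 trigger a review can simply keep every order at $499, while a score built from many weak signals is much harder to slip past.

```python
# Toy contrast between a static rule and a multi-signal risk score.
# All thresholds and weights here are invented for illustration.

def legacy_rule(txn):
    # Classic rules engine: block only if the amount clears a fixed bar.
    # A scammer who keeps every purchase at $499 slips through each time.
    return txn["amount"] > 500

def risk_score(txn):
    # Score-style scoring combines weak signals; in a real ML system the
    # weights would be learned from data rather than hand-set as here.
    signals = {
        "amount_over_typical": txn["amount"] / max(txn["avg_amount"], 1.0),
        "new_device": 0.0 if txn["device_seen_before"] else 1.0,
        "ip_country_mismatch": 1.0 if txn["ip_country"] != txn["card_country"] else 0.0,
        "velocity": txn["txns_last_hour"] / 5.0,
    }
    weights = {"amount_over_typical": 0.4, "new_device": 0.8,
               "ip_country_mismatch": 1.2, "velocity": 0.9}
    return sum(weights[name] * value for name, value in signals.items())

txn = {"amount": 499, "avg_amount": 60, "device_seen_before": False,
       "ip_country": "RO", "card_country": "US", "txns_last_hour": 9}
print(legacy_rule(txn))        # False: the fixed rule misses it
print(risk_score(txn) > 2.0)   # True: the combined signals flag it
```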

‘Clean’ and ‘Friendly’ Fraud Are Neither

With the use of digitally contrived or cobbled-together “synthetic identities” dropping due to better detection, cybercriminals will lean more heavily on reliable methods like buying hacked credentials on the dark web and sifting through reams of stolen ID and card information until they find the live ones. That’s “clean” fraud. Friendly fraud is even harder to detect, because the malicious actor is as likely as not to be the actual, legitimate cardholder. It happens when customers buy something, then contact the issuer and demand a stop payment or refund for any number of reasons (“I didn’t order this item” is the most popular, followed by claims of defective merchandise).

That tangle of chargebacks is a potentially huge problem. Visa, for example, can charge merchants up to $75,000 per month in penalties for chargebacks, and retailers with a glaring chargeback problem may even have their card acceptance privileges suspended. It’s a thorny issue because FIs and issuers naturally want to err on the side of their customers. But in today’s digital fraud environment, merchants have to minimize chargebacks without alienating shoppers.

Even against friendly fraud, chains are counting on smarter solutions like AI and unsupervised machine learning to pick up on new attack patterns as they emerge, stopping losses before they mount.