Synthetic Identities, Tailor-Made For Fraud

Familiarity doesn’t breed contempt in payments security; it builds breaches — the kind that linger and cost billions of dollars in losses tied to fraud. The synthetic ID may be fraudsters’ weapon of choice these days, but technology can help turn the tide, according to GIACT Systems’ Executive Vice President of Product David Barnhardt. In this world, social media is a (potential) first line of defense.

Equifax may have grabbed the headlines.

But in the world of identity fraud, there’s a greater sea change afoot, one that stretches further than the impact on any one firm, or even the millions of adults targeted in the most brazen of breaches.

The fact remains that the very data readily available across every transaction channel – from phone numbers to Social Security numbers – is there for the taking across all manner of firms, and can be used as building blocks for sophisticated schemes to defraud the innocent.

We’re in an age of smash and grab: one in which thieves smash cybersecurity defenses and grab what they need to make lucrative forays into banks and other firms, pilfering accounts and, through the creation of synthetic identities, racking up credit that should never have been extended.

The term conjures up plasticine figures, perhaps, snaking in and out of cyber realms and making off with the money before anyone notices. In reality, a “synthetic identity” is cobbled together from disparate sources into a whole new profile behind which thieves can hide – and perpetrate fraud.

In a conversation with PYMNTS’ Karen Webster, David Barnhardt, executive vice president of product at full-service payment and ID verification solutions provider GIACT Systems, said synthetic identity fraud is estimated to cost credit card firms $8.5 billion in charge-offs in the next year alone. That estimate was made well before the Equifax breach, though, so the actual figure is likely to come in quite a bit higher.

The emergence of synthetic identities is a testament to the resilience of fraudsters: Take away one avenue, and they find another. Chip cards removed at least some of the gains to be had from counterfeit cards, but, as Barnhardt put it, “All this fraud has to find a home. Fraudsters aren’t going to simply give up; they will relentlessly look for and find new methods to catch us off guard.”

In terms of process, a synthetic ID is created when fraudsters take the Social Security number of a real person, then alter other pieces of personally identifiable information (PII), such as the address and birthdate.

Because these identities are based on valid Social Security numbers, they will usually not be flagged either at new account enrollment or at the point of transaction. For all intents and purposes, they often appear and transact as if they were real people, since they are anchored to valid, verifiable numbers – until, that is, the losses are incurred.
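To make that gap concrete, here is a minimal sketch of the difference between a check that only validates the Social Security number and one that cross-checks the surrounding PII against an independent record. This is illustrative only: the Applicant fields and the bureau_lookup helper are hypothetical placeholders, not any vendor’s actual API.

```python
# Illustrative sketch only: the Applicant fields and bureau_lookup() helper are
# hypothetical placeholders, not any vendor's real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Applicant:
    ssn: str
    name: str
    dob: str
    address: str

def ssn_is_valid(ssn: str) -> bool:
    """Structural check only: a real, issued SSN passes even when the rest
    of the profile is fabricated."""
    digits = ssn.replace("-", "")
    return len(digits) == 9 and digits.isdigit()

def bureau_lookup(ssn: str) -> Optional[dict]:
    """Hypothetical call to an independent record source keyed on the SSN."""
    return None  # placeholder: a real check would return the record on file

def naive_enrollment_check(app: Applicant) -> bool:
    # This is the gap synthetic IDs exploit: the number is real, so it "verifies."
    return ssn_is_valid(app.ssn)

def cross_checked_enrollment(app: Applicant) -> bool:
    # Stronger: the name, birthdate and address must agree with what the
    # independent record says actually belongs to that SSN.
    record = bureau_lookup(app.ssn)
    if record is None:
        return False
    return (
        record["name"].lower() == app.name.lower()
        and record["dob"] == app.dob
        and record["address"].lower() == app.address.lower()
    )
```

A synthetic profile sails through the naive check because the number itself is real; it only stumbles when the surrounding details are required to agree with what is already on file for that number.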

Children’s Social Security numbers are a favorite target because youngsters’ records are relatively clean. Cybercriminals use that information to create new names, addresses and phone numbers, generating new profiles in a process that repeats over and over again.

Barnhardt said such scams can be especially lucrative when targeting companies that do not report to credit bureaus, as the synthetic profiles will not get caught for failing to pay on loans, nor will they get flagged when credit is taken out under false pretenses. Once proven among non-reporting companies, the synthetic identities are then unleashed on the mainstream, he explained.

The variations are many, with any permutation of names, addresses or passports “becoming new version 1.2” of an individual, according to Barnhardt.

Financial institutions and card issuers are not the only ones affected; wireless carriers, online retailers, utility providers and others are also at risk.

The newly fashioned profile can lie dormant for six months to a year. Fraudsters study the companies they’re hoping to dupe, looking closely at fraud prevention systems, tokenization efforts and “what companies do to protect themselves,” Barnhardt explained.

Don’t become predictable, he cautioned, especially when it comes to tokenization and other security efforts. Predictability breeds familiarity, and familiarity breeds breaches, losses or both.

All too often, Barnhardt said, companies confuse “true name” fraud – account takeovers – with synthetic identity fraud. The former has been around for a long time. Fraud prevention companies all say they know who the real customers are, and that they can pick out the good guys from the bad.

However, synthetic ID fraud “really changed the game,” Barnhardt said, catching a lot of providers and companies off guard. They now must up their own fraud detection and prevention game to stay ahead of today’s cybercriminals, who seem to always be two steps ahead.

Models to gauge risk and combat synthetic fraud? They are useful in some respects, but ultimately limited, he admitted, because they look at fraud based on past experience. There is no validated data from a third-party entity to ascertain whether people are who they say they are. And these new synthetic identities offer no social media presence to factor in, he said, because there is no context of previous associations or living history behind them.

“[They have] what I call the bubble around them,” noted Barnhardt.

For financial firms that are simply monitoring transaction activity, Webster and Barnhardt agreed, it’s already too late. Proactivity is key, particularly during enrollment and onboarding.

“People are going to look now more than ever,” Barnhardt said, especially to ensure the companies with which they do business are keeping data secure. They want to be validated, he added.

The Equifax breach makes it even more challenging, as cyber thieves can now pick and choose among a vastly increased trove of data – and it’s unclear how they will use it in the next iteration of synthetic identity creation.

Retailers and banks are scared of customer attrition. That fear is enough to keep firms from embracing robust fraud detection, lest they accidentally turn away good customers through false declines.

Friction can be good at times, according to Barnhardt, and “should [only] be invoked when absolutely necessary.” Companies like GIACT can tie mobile phones to networks and activity that might be suspicious, such as pulling funds, wholesale, from bank accounts. “If something goes bump in the night,” he said, the reaction can be swift, and the company can rush to tokenize data.

Avoiding synthetic IDs at the point of enrollment means firms must have up-to-date, factual comparative data against which to check what applicants enter on their forms. And, he said, companies “must not be afraid to ask for information” beyond the building blocks of names, addresses and emails. Additionally, Barnhardt advised companies to pull in social media profiles, or at least use them as a piece of the enrollment puzzle.

The ideal, he added, is to form a “triangle of trust” with three connected data points that must be in place before letting people transact. A customer’s name is one corner of the triangle; the others depend on whether individuals are active on social media, whether emails and mobile phones tie back to their consumer histories, or whether any of that data coincides with places where they have worked, among other options.
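As a rough illustration of that idea – not GIACT’s implementation – an enrollment check might anchor on the customer’s name and require a minimum number of corroborating signals before an account is allowed to transact. The signal-gathering helpers below are hypothetical placeholders for real data sources.

```python
# Rough illustration of the "triangle of trust" idea; each signal function is a
# hypothetical placeholder for a real data source, not GIACT's API.
from typing import Dict

def social_presence_matches(name: str, profile: Dict) -> bool:
    """Placeholder: does an established social media footprint tie back to this name?"""
    return bool(profile.get("social_history"))

def contact_ties_to_history(profile: Dict) -> bool:
    """Placeholder: do the email and mobile number tie back to the applicant's consumer history?"""
    return bool(profile.get("email_history")) and bool(profile.get("phone_history"))

def employment_corroborates(profile: Dict) -> bool:
    """Placeholder: does any of that data coincide with places the applicant has worked?"""
    return bool(profile.get("employer_match"))

def triangle_of_trust(name: str, profile: Dict, required: int = 2) -> bool:
    """The customer's name anchors one corner; require at least `required`
    of the other corroborating signals before letting the account transact."""
    signals = [
        social_presence_matches(name, profile),
        contact_ties_to_history(profile),
        employment_corroborates(profile),
    ]
    return sum(signals) >= required
```

Under these assumptions, an applicant whose name ties back to an established social footprint and whose email and phone match a consumer history would pass, while a freshly minted profile with none of those ties would not.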

“If someone doesn’t have some form of social media, it is a problem,” Barnhardt said, and other data points should help verify an individual based on his or her digital footprint.

Single-point solutions and model scores are one-dimensional and can only be effective if they have the right data, he noted.

“GIACT uses non-traditional data, and we continue to evolve this,” he told Webster. Among newer initiatives is facial recognition, whereby photos are matched against databases to ascertain identity.

An example of the next version of verification, according to Barnhardt, is a combination process in which one takes a picture of a driver’s license, then compares it to a picture of one’s face. Similar processes are already being used in the U.K. and have proven to be effective.
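A simplified sketch of that document-to-selfie comparison might look like the following. Here, extract_face_embedding stands in for whatever face-recognition model a real deployment would use, and the 0.8 similarity threshold is an arbitrary assumption; only the comparison step is shown.

```python
# Sketch of a license-photo-to-selfie comparison.
# extract_face_embedding() is a placeholder for a real face-recognition model;
# the 0.8 threshold is an arbitrary assumption.
import numpy as np

def extract_face_embedding(image_path: str) -> np.ndarray:
    """Placeholder: return a fixed-length embedding of the face in the photo."""
    raise NotImplementedError("plug in a real face-recognition model here")

def faces_match(license_photo: str, selfie: str, threshold: float = 0.8) -> bool:
    """Compare the face on the driver's license to the live selfie via cosine
    similarity of their embeddings."""
    a = extract_face_embedding(license_photo)
    b = extract_face_embedding(selfie)
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold
```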

“You will have to triangulate multiple sources, because there is so much data out there,” he said.