Can AI Fix Consumer Onboarding Security Hiccups?

The sharing economy’s business model is straightforward: Individuals list goods and services on curated online platforms that foster pain-free connections to renters or buyers. That ease of connection lets the economy grow quickly, but it also creates fraud and security challenges.

Fraudsters have flocked to online marketplaces with the same zeal as legitimate consumers, taking full advantage of such platforms’ anonymity and transaction speeds. Sophisticated fraud detection systems have pushed cybercriminals toward subtler attacks, such as building false identities that appear genuine until money changes hands. Synthetic identities fool risk detection systems 85 percent to 95 percent of the time, and fraudulent listings can affect even veteran sharing services like Airbnb.

Fraud’s consequences can cripple marketplaces, bringing disaster to more than their bottom lines. Sharing economy platforms depend on trusted relationships between buyers and sellers, and large-scale fraud can dissolve that trust. Platforms will therefore need to keep their verification tools on security’s cutting edge, including AI-focused offerings.

The static verification problem

Because sharing companies live online, the identity markers they rely on are digital as well. Marketplaces ask potential sellers, renters or buyers to provide email addresses, usernames and other similar details to coordinate business. Recent PYMNTS research found that 71 percent of surveyed platforms use email addresses, the most popular method, to verify users. Phone numbers and email responses are also common.

These identification tools are rapidly becoming obsolete, however, as high-volume breaches expose thousands of consumers’ personally identifiable information (PII). Account details stolen from one source can easily be repurposed on another platform, as was the case when a recent hack of Ring, Amazon’s home surveillance and smart camera company, leaked more than 3,000 users’ details. Many customers had used Amazon credentials to access their Ring services, so the breach gave fraudsters a jumping-off point to attack Amazon sharing services, plant false listings or spark other new fraud types.
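One way platforms can blunt this kind of credential reuse is to screen onboarding passwords against known breach corpora. The sketch below does so with Have I Been Pwned’s public k-anonymity range API; it is one plausible defense, not any specific platform’s practice, and only the first five characters of the password’s SHA-1 hash ever leave the client.

```python
# A minimal sketch of breach-corpus screening at signup, using the
# public Have I Been Pwned k-anonymity range API.
import hashlib
import urllib.request

def password_breached(password: str) -> bool:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character hash prefix is sent; the full password
    # and full hash never leave this machine.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "SUFFIX:COUNT"; a suffix match means the
    # password appears in a known breach and should be rejected.
    return any(line.split(":")[0] == suffix for line in body.splitlines())

print(password_breached("password123"))  # True: a well-known breached password
```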

Sharing economy stakeholders can thus no longer rely on static PII for authentication. Marketplaces will need to analyze less obvious aspects of potential users’ online identities to retain critical security and trust, potentially drawing on behavioral and historical traits consumers may not even be aware of. Tapping details that complex requires technologies able to isolate patterns invisible to humans.
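As an illustration of what such pattern isolation might look like, the sketch below trains an unsupervised anomaly detector on behavioral signup signals. The features (email age, typing cadence, device history, signup hour) are hypothetical stand-ins, and the isolation forest is one plausible technique rather than any platform’s actual model.

```python
# A minimal sketch of behavioral anomaly scoring at onboarding.
# All features are hypothetical; thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)
# Simulated legitimate signups: email age in days, form-fill typing
# cadence in ms/keystroke, platforms linked to the device, signup hour.
legit = np.column_stack([
    rng.normal(1200, 400, 500),   # long-lived email addresses
    rng.normal(180, 40, 500),     # human typing cadence
    rng.poisson(3, 500),          # a few linked platforms
    rng.integers(7, 23, 500),     # daytime signups
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(legit)

# A suspicious signup: brand-new email, machine-fast form fill,
# no device history, 3 a.m. registration.
candidate = np.array([[2, 15, 0, 3]])
print(model.decision_function(candidate))  # low score -> anomalous
print(model.predict(candidate))            # -1 flags an outlier
```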

AI’s role in sharing verification

Marketplaces are already familiar with how AI and advanced learning tools provide deeper insights about customers, enabling personalized services and greater differentiation from competitors. Using such insights to prove customers’ legitimacy could add the extra layer of protection needed to ward off subtler forms of fraud. AI and machine learning (ML) operate on the back end, meaning fraudsters looking to generate false listings with illegitimate identities must get past these invisible security measures as well as those employed during account creation.
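The sketch below shows that layering in miniature: a visible front-end verification step followed by an invisible back-end risk gate. The Signup fields, the threshold and the outcomes are illustrative assumptions, not a real onboarding API.

```python
# A minimal sketch of layering a back-end ML risk score behind
# front-end verification; names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Signup:
    email_verified: bool
    phone_verified: bool
    risk_score: float  # 0.0 (safe) to 1.0 (risky), from a back-end model

RISK_THRESHOLD = 0.8  # assumed cutoff; real systems tune this

def onboard(signup: Signup) -> str:
    # Front-end checks the applicant can see...
    if not (signup.email_verified and signup.phone_verified):
        return "rejected: failed visible verification"
    # ...plus an invisible back-end gate the fraudster never observes.
    if signup.risk_score >= RISK_THRESHOLD:
        return "held for manual review"
    return "approved"

print(onboard(Signup(True, True, 0.12)))  # approved
print(onboard(Signup(True, True, 0.93)))  # held for manual review
```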

AI could profoundly impact sharing marketplaces’ fraud detection, especially as fraudsters develop their own increasingly sophisticated technologies. Fraudulent onboarding onto online platforms to serve more complex fraud schemes is becoming popular, increasing 27.8 percent worldwide during 2019 despite anti-fraud innovation efforts.

Companies using these tools alongside other authentication measures may be able to defeat schemes like false listings or synthetic identity fraud, as fraudsters take between 12 and 18 months to create synthetic identities that can fool detection services reliant on static customer PII. AI can probe an identity’s authenticity without bad actors’ knowledge, all while keeping legitimate users’ login and registration processes quick and seamless. The technology can scan for anomalies that may tell platforms whether identities were only recently created, for example, and is especially effective when paired with tougher verification tools like biometrics or two-factor authentication (2FA) at checkout.
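A minimal sketch of that pairing appears below: an identity-age anomaly check that steps up to a 2FA challenge at checkout. The 540-day threshold loosely mirrors the 12-to-18-month aging window mentioned above, and send_2fa_challenge is a hypothetical hook, not a real provider call.

```python
# A minimal sketch of pairing an identity-age anomaly check with
# step-up 2FA at checkout; threshold and hook are assumptions.
from datetime import datetime, timezone

MIN_IDENTITY_AGE_DAYS = 540  # ~18 months; synthetic IDs are often younger

def identity_age_days(first_seen: datetime) -> int:
    """Days since the identity's earliest observed activity."""
    return (datetime.now(timezone.utc) - first_seen).days

def send_2fa_challenge(user_id: str) -> bool:
    """Placeholder for a real 2FA provider; always passes here."""
    print(f"2FA challenge sent to {user_id}")
    return True

def authorize_checkout(user_id: str, first_seen: datetime) -> bool:
    # A young identity is a common synthetic-fraud signal, so it
    # triggers a step-up challenge before the transaction clears.
    if identity_age_days(first_seen) < MIN_IDENTITY_AGE_DAYS:
        return send_2fa_challenge(user_id)
    return True  # low-risk path stays seamless for legitimate users

print(authorize_checkout("user_123", datetime(2019, 11, 1, tzinfo=timezone.utc)))
```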

Fraudsters are unlikely to stop innovating, though. They are largely familiar with how sharing platforms authenticate new and existing users and have access to their own innovations to pit against anti-fraud tools. Platforms will thus need to continually adapt their verification methods to stop those aiming to slip through the cracks.