Keeping On-Demand Professional Services Fraud-Free

Catching fraudsters can feel like a game of whack-a-mole for marketplaces that must carefully vet both buyers and sellers. The key, says Oisin Hanrahan, CEO of on-demand professional marketplace Handy, is using artificial intelligence (AI) and machine learning tools to predict the probability of bad outcomes before they happen. He explains how in the new Fraud Decisioning Playbook.

Digital marketplaces rely on trust to thrive. Customers who do not feel they can safely transact on such platforms are highly unlikely to return to them, putting tremendous pressure on merchants to provide experiences that are both seamless and secure.

The challenge is compounded for two-sided marketplaces that connect customers with service providers. These platforms must ensure that all parties are trustworthy, which puts them in a precarious position as they work to build their user bases without alienating either side with their anti-fraud measures. 

One such marketplace is on-demand professional services platform Handy, which connects customers with workers who can perform a multitude of household tasks, such as cleaning, furniture assembly and appliance installation. CEO Oisin Hanrahan recently spoke to PYMNTS about how the company combines artificial intelligence (AI) and machine learning (ML) solutions and human insights to ensure its anti-fraud efforts do not interfere with legitimate users’ experiences. 

“There are ever more ways in which funds are flowing digitally, and that creates an opportunity for fraudsters to behave in ways that are detrimental to legitimate customers,” he said. “I hope … that we can continue to drive toward this world where we can identify individual actors with very high levels of certainty.” 

Walking The Security-Seamlessness Line

Vigilance against fraudsters begins with the onboarding process, according to Hanrahan. Customers and professionals who use Handy’s platform must provide data that it can use to enhance interactions and build trust upfront. 

“When you’re onboarded to the platform, we ask you a number of questions and do a pretty deep regression analysis to figure out how your answers to questions are likely to lead to a great experience,” he said. 
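
To make that idea concrete, the sketch below shows roughly how a regression model can turn onboarding answers into a probability of a bad outcome. It is an illustration only: the feature names, sample data and the 0.8 review threshold are hypothetical, not Handy’s actual model.

```python
# Illustrative sketch: scoring onboarding answers with a logistic regression.
# Feature names, sample data and the 0.8 review threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [account_age_days, mismatched_address, disposable_email, prior_chargebacks]
X_train = np.array([
    [400, 0, 0, 0],
    [2,   1, 1, 0],
    [150, 0, 0, 1],
    [1,   1, 1, 2],
    [90,  0, 1, 0],
    [3,   1, 0, 2],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = booking ended in a bad outcome

model = LogisticRegression().fit(X_train, y_train)

new_signup = np.array([[3, 1, 1, 0]])
risk = model.predict_proba(new_signup)[0, 1]  # estimated probability of a bad outcome

if risk > 0.8:
    print(f"Route to manual review (risk={risk:.2f})")
else:
    print(f"Continue standard onboarding (risk={risk:.2f})")
```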

The onboarding process uses AI and ML to review data such as Social Security numbers, along with facial recognition technology that compares selfies against government-issued identification. The platform also runs background checks on customers and processes credit card payments through a predictive analytics system to gauge fraud risk. Handy requires two-factor authentication (2FA) via text message or email response to verify certain activities, including sudden changes to credit card numbers, home addresses or, in the case of professionals, the bank accounts into which their funds are deposited. 
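
The verification triggers described above can be sketched as a simple rule: any change to a sensitive field holds the update until the user confirms a one-time code by text or email. The field names and flow below are assumptions for illustration, not Handy’s implementation.

```python
# Illustrative sketch of step-up (2FA) verification on sensitive account changes.
# Field names and messages are hypothetical.
SENSITIVE_FIELDS = {"credit_card_number", "home_address", "payout_bank_account"}

def requires_two_factor(changed_fields):
    """Return True if any changed field should trigger an SMS or email challenge."""
    return bool(set(changed_fields) & SENSITIVE_FIELDS)

def apply_profile_update(user_id, changes):
    if requires_two_factor(changes):
        # Placeholder: send a one-time code and hold the update until confirmed.
        return f"Update for {user_id} held pending 2FA confirmation"
    return f"Update for {user_id} applied immediately"

print(apply_profile_update("customer-123", {"display_name": "Sam"}))
print(apply_profile_update("pro-456", {"payout_bank_account": "****9921"}))
```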

These measures are necessary to protect both sides of Handy’s platform, but Hanrahan said the challenge is in making sure its anti-fraud efforts are not cumbersome for legitimate consumers looking to hire professionals. 

“If we were to go and insist that every customer that comes to our site would do the same photo verification that a pro would do, that would obviously be too extreme,” he said. “It’s a matter of figuring out how you put in place the right balance to maintain a seamless customer experience.” 

Getting Preemptive About Fraud

Handy protects its platform by underwriting transactions, which ensures customers are not liable for fraudulent payments made with stolen credit cards and that professionals are paid for their jobs. The latter are also guarded against false reviews because the platform enables only users who have hired and paid for services to share their experiences online. 

Handy’s role as an underwriter requires it to determine how fraud affects its bottom line and how much activity should trigger a response. It also relies on a team of contractors to analyze the platform for patterns and uncover vulnerabilities. 

“The last part is to say, ‘How can we preemptively spot these things before they actually come to fruition?’” he said. “To do that, we engage with folks to really dig in on the platform and try and identify vulnerabilities in advance.” 

The company’s solutions combine several data fields to determine whether new fraud patterns are emerging. AI and ML tools review individual users’ behaviors and identify trends over time, while Handy’s data scientists dig into the correlations that matter. Hanrahan noted that advanced learning tools help new users onboard smoothly, but humans will still be needed to provide support and ensure seamless user experiences. 

“I think we’re always going to need humans to figure out certain parts of the decision tree and certain parts of where to draw the boundary between putting friction in the customer experience, what that friction looks like [and] what’s the most creative way to put that friction in place to remove fraud,” he said. “I think it’s a balance of both in the long term.” 
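
As a rough illustration of the machine side of that balance, the sketch below combines a few data fields to flag an emerging pattern, such as a spike in card declines from newly created accounts. The column names, sample data and doubling threshold are assumptions, not Handy’s actual signals.

```python
# Illustrative sketch: combining fields to surface an emerging fraud pattern,
# e.g. a spike in card declines from newly created accounts.
# Column names, sample data and the 2x threshold are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "day": ["mon", "mon", "mon", "mon", "tue", "tue", "tue", "tue"],
    "account_age_days": [300, 250, 400, 2, 1, 2, 250, 1],
    "card_declined": [0, 0, 0, 1, 1, 1, 0, 1],
})

# A "risky" event: a declined card on an account less than a week old.
risky = (events["account_age_days"] < 7) & (events["card_declined"] == 1)
daily_rate = events.assign(risky=risky).groupby("day")["risky"].mean()

baseline, latest = daily_rate.iloc[0], daily_rate.iloc[-1]
if baseline > 0 and latest > 2 * baseline:
    print(f"Possible emerging pattern: risky-event rate {latest:.0%} vs baseline {baseline:.0%}")
```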

Digital marketplaces that enable users to exchange funds electronically require several layers of protection. AI and ML tools can quickly surface trends, but human insights are necessary to make those solutions more effective at catching fraudsters while keeping experiences seamless for legitimate users. These security layers are a must in a market that relies on trust, and they ensure that home professionals and consumers can transact with peace of mind.