The Changing Face of Fraud

Do you know how many fraudulent merchants you have within your portfolio at any given time?

Fraud metrics within an acquirer portfolio tend to be hard won, as the knowledge is gained only after the financial loss has occurred. The industry attempts to mitigate risk via ongoing transaction monitoring: excessive chargebacks (a dispute ratio greater than 0.9%)? Transaction volume exceeding 150% of expected?
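Monitoring rules like these reduce to simple threshold checks. A minimal sketch, assuming illustrative field names and the threshold values mentioned above (real programs tune these per acquirer and card network):

```python
# Illustrative transaction-monitoring checks. Thresholds mirror the
# examples above; in practice they are tuned per acquirer and network.

def monitoring_flags(disputes: int, transactions: int,
                     volume: float, expected_volume: float) -> list[str]:
    """Return the list of monitoring rules a merchant trips."""
    flags = []
    # Dispute ratio above 0.9% of transaction count.
    if transactions and disputes / transactions > 0.009:
        flags.append("excessive_chargebacks")
    # Processing volume exceeding 150% of the expected volume.
    if expected_volume and volume > 1.5 * expected_volume:
        flags.append("volume_spike")
    return flags

print(monitoring_flags(disputes=12, transactions=1000,
                       volume=180_000, expected_volume=100_000))
# → ['excessive_chargebacks', 'volume_spike']
```

A merchant tripping one or more flags would typically be queued for analyst review rather than auto-actioned.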

Quantifying risk at onboarding is even harder, as the fraud is still hypothetical at that stage; onboarding often becomes focused on performance metrics such as “90% auto-approval of merchants” as the definition of success. Understanding what your merchant base looks like from a fraud perspective, however, is key to setting your onboarding and automation appetite.

The payments industry has always been targeted by fraudulent actors, who evolve in step with the technology designed to catch them. As the payments industry becomes increasingly digital, an issuing bank or merchant acquirer may never come face-to-face with its customers, which increases the challenge of safeguarding the ecosystem from fraud. Merchant acquirers, in addition to screening potential merchants for AML and OFAC compliance, must determine the likelihood of suffering financial harm through ineffective or malicious actions by the merchant. Factor in acquirers’ desire to onboard merchants instantaneously and at scale, and fraud becomes a significant risk.

RPY Innovations has helped many ISOs, ISVs, sponsor banks, acquirers, and Payment Facilitators evolve their merchant onboarding and monitoring solutions; this article deep-dives into one particular implementation.

The Project

RPY Innovations worked with a large Payment Facilitator on the implementation of an AI rules-based merchant underwriting and onboarding solution. As the PayFac was horizontal in nature (servicing many different industries, or verticals), there was no clear definition of what a “good” merchant looked like, since expected merchant processing volume varies by vertical.

Our work included a few key data sources:

- Underwriting metadata for thousands of merchants. The metadata consisted of the results of checks such as “age of business” or “count of criminal record findings” related to the beneficial owner and business entity, without any personally identifying information about the merchant itself. Each merchant had over 20 metadata attributes.

- The PayFac had segregated the list into merchants who had, and who had not, caused it to suffer financial harm in the past through suspected merchant fraud.

RPY’s goal was to create an initial set of rules for the AI engine to help identify legitimate merchants while diverting suspicious merchant applications for human review.

Findings

While the merchants who had caused financial harm represented a small demographic within the merchant base, the scale of the losses suggested intent in the actions – driving the need to determine any commonalities between malicious actors.

The initial results, however, found that fraudulent merchants mirrored legitimate merchants in almost every way: average email length, refund ratios, declines. Everything was… average.

Suspiciously so.

In the 1950s, the U.S. Air Force decided to increase efficiency by designing the perfect cockpit for their planes. They measured over 4,000 pilots on over 140 data points – height, weight, leg length, down to thumb length – to create the “average” persona that would fit all.

As it turned out – it fit none. While data from the pilots generated the definition of average, no individual pilot was perfectly average. We are defined by our quirks, and no one person is perfectly average in all regards.

While RPY’s initial analysis found no commonalities unique to the fraudulent merchants, an interesting pattern did emerge: the more outliers a merchant had (or, said another way, the less average they were), the more trustworthy they were likely to be.

These outliers could be harmless statistics, such as an unusually long (or short) email address or an unusually old email address, and, most interestingly, attributes considered negative, such as excessive chargebacks, declines, or even derogatory findings. Evictions? Speeding tickets? For RPY’s client, these all correlated with a lower chance of fraud, not a higher one.

Why would these deviations – especially the negative findings – correlate with trustworthiness?

There is only one demographic of merchants that is perfectly average in all regards: synthetic ones. Unlike real people, who are tied to their credit history through all of life’s ups and downs, synthetic identities face no hardship and get no speeding tickets: they are the one demographic where being perfect is the default.
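One way to operationalize this finding is to count how many attributes deviate from the population norm and treat a near-zero outlier count as a risk signal. A hedged sketch using z-scores; the attribute names, thresholds, and routing labels are hypothetical, not the client’s actual ruleset:

```python
from statistics import mean, stdev

def outlier_count(profile: dict, population: list[dict], z: float = 2.0) -> int:
    """Count attributes where this merchant deviates from the population
    mean by more than `z` standard deviations."""
    count = 0
    for attr in profile:
        values = [m[attr] for m in population]
        mu, sigma = mean(values), stdev(values)
        if sigma and abs(profile[attr] - mu) / sigma > z:
            count += 1
    return count

def route(profile: dict, population: list[dict], min_outliers: int = 1) -> str:
    # A merchant with *no* outliers across many attributes is
    # suspiciously average: divert to human review instead of
    # the automated approval path.
    if outlier_count(profile, population) < min_outliers:
        return "human_review"
    return "auto_path"
```

In practice the outlier count would be one input among many, weighted against the 20-plus metadata attributes the engine already scores.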

Synthetic Identities

Roughly 4 million Americans turn 18 every year and become eligible to open a credit card, in addition to the 1 million new entrants to the credit ecosystem via immigration. On the Canadian side, we see nearly 1.5 million new entrants annually, driven primarily by immigration. Newly issued SSNs/SINs pose a key challenge to credit issuers: a percentage of the potential client base will not have a credit history. This is often mitigated by prepaid cards, limited lines of credit, and credit-building products that report to the credit bureaus to help new entrants establish credit history.

Synthetic identities take advantage of this ecosystem by pairing an SSN with stolen or fictitious data to start building a credit profile; over time they will appear to have an established credit history, potentially even being added to utility bills and food delivery apps to appear to have a physical residence. These identities are groomed to be the perfect persona until the fraudster “cashes in” the identity by using it to perform payment fraud. While a persona is typically burned after use, a single synthetic persona can be re-used multiple times in systems lacking deny lists and proper KYC/KYB checks. Synthetic identities are estimated to have accounted for nearly $2.5 billion in payments fraud in 2023.
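The re-use risk above is commonly mitigated with a deny list keyed on identity attributes, so a burned persona is caught at its next application. A minimal sketch, assuming a hypothetical key of SSN plus name; hashing the key rather than storing raw PII is one common design choice:

```python
import hashlib

class DenyList:
    """Store hashes of burned identities so a re-used synthetic
    persona is flagged at the next onboarding attempt."""

    def __init__(self) -> None:
        self._burned: set[str] = set()

    @staticmethod
    def _key(ssn: str, name: str) -> str:
        # Hash the identity key rather than storing raw PII;
        # lowercase the name so trivial variations still match.
        return hashlib.sha256(f"{ssn}|{name.lower()}".encode()).hexdigest()

    def burn(self, ssn: str, name: str) -> None:
        self._burned.add(self._key(ssn, name))

    def is_burned(self, ssn: str, name: str) -> bool:
        return self._key(ssn, name) in self._burned
```

A production deny list would match on fuzzier keys (address, device, bank account) as well, since fraudsters vary one field at a time.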

Outputs

The correlation between “perfectly average” and fraudulent led to several rule suggestions, both for identifying potential synthetic identities and for considering the count of outliers in the merchant application. A strong review-and-modify program was implemented to ensure the rules continued to target the most current threat vectors.

Rulesets will inherently vary by acquirer, depending on whether it services a vertical or horizontal client base, its risk appetite, and the continuing pressure to provide instant merchant underwriting and approvals.

Continuously evolving your fraud solutions is key to protecting your business. If you’re interested in a conversation about your fraud and risk monitoring, please reach out to RPY Innovations at hello@rpyin.com.

Citation:

Research from Aite Group, sponsored by consumer credit reporting agency TransUnion, found that synthetic identity fraud for unsecured U.S. credit products totaled $1.8 billion in 2020 and projected it would grow to $2.42 billion by 2023.
