How do you know if a fraudster is human or AI?

Tuesday, November 14, 2023

5 Minute Read

How many times have we heard the following: “The only constant is change”? Fraud schemes are constantly changing, and AI development is in constant evolution. It’s no wonder card issuers find it increasingly difficult to detect what is behind the fraud in their portfolios.

While technological advancements—such as biometrics and facial recognition—help to strengthen security, it’s still difficult to pinpoint an individual person causing the fraud when bots and AI can generate fraud that looks like a genuine transaction. A recent survey by PYMNTS reveals more about this sobering reality, showing that 58% of financial institutions (FIs) noticed increasing sophistication in the financial crimes they experienced.1

One question that’s becoming more common among FIs when discussing fraud: Is this fraud human- or AI-based?

And on the flip side of that issue, what’s more important to your fraud strategy when identifying fraudsters and protecting your organization: the human agents who determine, set and analyze your fraud rules, or the AI and machine learning that help detect fraud activity? Or is it more important how the two work together to identify and mark fraud, which is then fed into the appropriate models based on your portfolio’s needs and the fraud tools you have in place?

Combining human and artificial intelligence

The preferred solution is a combination of human and AI elements, so let’s quickly examine each in a little more detail.

High cost of manual analysis

First, let’s consider the human perspective. Fraud analysts are engaged to draw conclusions about fraud patterns and genuine behaviors. Their insights can then be used to create screening rules. This manual review process can be costly and time-consuming, and it tends to generate a high level of false positives, which in turn can have a negative impact on the cardholder experience. For organizations that lack confidence in automated solutions, fraud agent resources can account for a substantial portion of the overall fraud management budget.

Improvement with machine learning

Enter AI, or in this case, machine learning. Machine learning technology is capable of automating the manual review and rule-writing process. It can convert the data-intensive aspects of the job into a workable format, presenting actions to an organization’s decision makers. The more data that is fed into the machine learning model, the better it can distinguish between fraudulent and genuine activity, sharpening the fraud detection process and generating better predictions, with fewer false positives.
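As a minimal, illustrative sketch of that idea, consider training a supervised model on analyst-labeled transaction history and then scoring new activity. The synthetic features, data and model choice below are assumptions for illustration, not TSYS’s actual models.

```python
# Illustrative only: train a classifier on analyst-marked transactions,
# then score a new transaction. Features and data are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

# Synthetic labeled history: amount, hour of day, e-commerce flag, recent velocity
X = np.column_stack([
    rng.exponential(80, n),      # transaction amount
    rng.integers(0, 24, n),      # hour of day
    rng.integers(0, 2, n),       # e-commerce flag
    rng.poisson(1.5, n),         # transactions in the prior hour
])
y = (rng.random(n) < 0.02).astype(int)  # ~2% of rows marked as fraud by analysts

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a new transaction; the probability would feed downstream decisioning.
new_txn = np.array([[420.0, 3, 1, 7]])
print("fraud score:", model.predict_proba(new_txn)[0, 1])
```

The key point is the feedback loop: the more consistently analysts mark fraud in the history, the more reliable the scores the model can produce.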

Better together

Machine learning can be used effectively to detect actions in layers to single out potentially fraudulent behavior, but models must be consistently fed data with proper fraud marking. That’s why machine learning is only as good as the fraud analysts behind it. Even the most advanced technology cannot replace the expertise and judgment it takes to effectively filter and process data and evaluate the meaning of the risk score.

Striking the right balance between fraud analysts and machine learning tools can be the key to an organization's success in fighting fraud. And there's no one-size-fits-all answer. Each organization has different needs and goals that have to be assessed accurately in order for them to succeed. - Kasey Boyd, Senior Director & Head of Fraud, TSYS Issuing Solutions

The common ingredient? Data

No matter how you decide to structure your fraud fighting teams and tools, the basic ingredient needed to drive everything is data. So let’s look at the importance of data, and how you could use it to feed your machine learning models.

Data types and data handling

Issuer processors are uniquely positioned at the intersection of customer and account data, schemes data, bureau data, consortium and customized scores, and institutional data. They also have the ability to bring in third-party data from partners and vendors, as well as external client files, to determine cardholder behavior.

Once you have the right data identified and sequenced, it can then be fed into models. Some models might require mature data, while others will lean on real-time data. Once the models are established and running, they can produce different views which serve as different indicators of fraud, all depending on the data they’re being fed.

Different data points are used to drive very specific outcomes. Take, for example, velocity. In a velocity check, an issuer can identify a bot (or AI fraudster) based on a front-end or back-end indicator. In this case, the data feed doesn’t point to a single model or score, and may be more indicative of fraud than scoring alone. Or consider the data points related to data security that help prevent data breaches: purging, transit, risk, encryption, tokenization and data properties.

Example data points: account, POS entry mode, card present, terminal ID, merchant name, merchant ID, merchant category code, e-commerce, device, velocity.
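A velocity check like the one described above can be sketched very simply: count recent transactions per account inside a short window and flag bursts that look more like a bot than a cardholder. The window, threshold and field names here are assumptions for illustration, not TSYS defaults.

```python
# Illustrative velocity check: flag accounts whose transaction count in a short
# window exceeds a threshold. Window and threshold values are assumptions.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
MAX_TXNS_IN_WINDOW = 8

_recent = defaultdict(deque)  # account -> timestamps of recent transactions

def velocity_alert(account: str, ts: datetime) -> bool:
    """Return True when the account exceeds the velocity threshold."""
    q = _recent[account]
    q.append(ts)
    while q and ts - q[0] > WINDOW:
        q.popleft()
    return len(q) > MAX_TXNS_IN_WINDOW

# Example: a burst of card-testing attempts a few seconds apart
start = datetime(2023, 11, 14, 12, 0, 0)
for i in range(10):
    flagged = velocity_alert("acct-123", start + timedelta(seconds=3 * i))
print("velocity alert:", flagged)
```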

Data weighting

Depending on your card portfolio and the type of data you have access to, you have many models to choose from: consortium, schemes, regional and custom scores. Issuers utilizing multiple fraud scores may take the median of those values. Certain models may be used as a baseline; for example, one model may target high values for easy fraud capture wins on large-dollar transactions. Or, an issuer may place weight on the data for key indicators, resulting in a better predictor of fraud. Models may also be weighted toward different behavior. For instance, some might be weighted more toward non-monetary account behavior, while others are more adept at spotting anomalies versus genuine behavior due to regular online recalibration. It’s these key differences between models, produced by the layering and weighting effects, that yield the discovery of new fraud patterns.
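A minimal sketch of the two blending approaches mentioned above follows: taking the median across multiple scores, or applying portfolio-specific weights. The score names and weight values are illustrative assumptions, not recommended settings.

```python
# Illustrative blending of multiple fraud scores; names and weights are assumed.
from statistics import median

scores = {"consortium": 0.62, "scheme": 0.48, "regional": 0.55, "custom": 0.81}

# Option 1: take the median across providers
median_score = median(scores.values())

# Option 2: weight the scores toward the portfolio's key indicators
weights = {"consortium": 0.2, "scheme": 0.2, "regional": 0.1, "custom": 0.5}
weighted_score = sum(scores[k] * weights[k] for k in scores)

print(f"median score:   {median_score:.2f}")
print(f"weighted score: {weighted_score:.2f}")
```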

Fraud marking

How data is fed and how data is handled are important, but effective fraud strategies also depend on optimization of the analytics; in other words, tracking the right data at the right time. Consistently taking action on the data is key. FIs tend to report and apply fraud markings consistently, or use the same data consistently, only 70% to 80% of the time. As a best practice, TSYS recommends issuers mark as much fraud as possible, ideally 100% of the time. Other recommendations for consistency include marking fraud across approved transactions, declined transactions and statements.

Consistency is the key. Consistently expanding issuer data sets by training, testing and improving what is fed to machine learning, and by consistently marking fraud, enables machine learning to locate patterns and identify fraud at a higher success rate. - Kasey Boyd, Senior Director & Head of Fraud, TSYS Issuing Solutions

Timeliness also plays a role in the quality of fraud markings. Some FIs may wait for declarations and hold for up to 30 days before marking fraud, but TSYS has found that an average marking period of three days or less works best for issuers. Compared with a 30-day window, three days can create a notable difference in the quality of your fraud marking. The longer fraud goes unnoticed, or unmarked, the more damage it can do to your bottom line.
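One simple way to keep an eye on that timeliness is to measure marking latency per confirmed case, as in the sketch below. The field names and sample dates are assumptions for illustration.

```python
# Illustrative marking-latency check: how many confirmed fraud cases were
# marked within three days of the transaction? Fields and dates are assumed.
from datetime import date

cases = [
    {"txn_date": date(2023, 10, 2), "marked_date": date(2023, 10, 3)},
    {"txn_date": date(2023, 10, 5), "marked_date": date(2023, 10, 20)},
    {"txn_date": date(2023, 10, 9), "marked_date": date(2023, 10, 11)},
]

within_3_days = sum((c["marked_date"] - c["txn_date"]).days <= 3 for c in cases)
print(f"marked within 3 days: {within_3_days}/{len(cases)}")
```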

A case in point

Here’s an example of a client that faced persistently high false positive rates throughout 2021. Once they placed a higher priority on consistently marking fraud starting in April 2022, they were able to significantly reduce their false positive rate from 12.9% to 1.7% over a six-month period ending in September 2022, at a fraud catch rate of 40%.2 This illustration shows that placing a renewed emphasis on fraud marking can produce much lower false positive rates, which minimizes disruption for cardholders and can lead to an increase in transactions.

Graph: False positive rate by volume (fraud catch = 40%)
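For reference, the two metrics in this example can be computed directly from confusion-matrix counts: the false positive rate is the share of genuine transactions flagged as fraud, and the fraud catch rate is the share of actual fraud that was flagged. The counts in the sketch below are made up for illustration, not the client’s data.

```python
# Illustrative metric calculations; the counts below are invented examples.
def false_positive_rate(false_pos: int, true_neg: int) -> float:
    """Genuine transactions incorrectly flagged as fraud."""
    return false_pos / (false_pos + true_neg)

def fraud_catch_rate(true_pos: int, false_neg: int) -> float:
    """Actual fraud correctly flagged."""
    return true_pos / (true_pos + false_neg)

print(f"False positive rate: {false_positive_rate(17, 983):.1%}")
print(f"Fraud catch rate:    {fraud_catch_rate(40, 60):.1%}")
```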

Finding the right mix, and taking action

Whether you’re fighting AI- or human-generated fraud, combining the human element with the power of machine learning is an effective approach. Fraud analysts need to be consistent with the data being fed to your models and with the marking of that data. That consistency gives your machine learning capabilities stability, which should lead to fewer false positives and fewer declined transactions, which in turn could lead to lower operational overhead and increased cardholder satisfaction.

  1. PYMNTS.com “State of Fraud and Financial Crime in the US report” with Featurespace, September 2022
  2. Proprietary TSYS data
