A data-driven approach to using AI against financial crime
Last year, more than $250,000 was lost to cybercrime every second! That added up to $8 trillion for 2023. And by 2025, it’s projected to be even higher at $10.5 trillion.
From identity theft to credit card fraud, criminals are using generative AI and other emerging technologies to extract customer account information. Is it the bank’s responsibility to keep customers safe? Should customers be accountable when falling for phishing scams? Is more advanced payment authentication technology needed?
The answer to all of these questions is yes. But it’s not that simple.
For card issuers, it can be a constant struggle to stay ahead of evolving fraud tactics and keep customers’ data safe, all while limiting the transaction friction that added security measures can create.
Think about how people use technology to access and share personal and financial data — such as buying an outfit online by entering a credit card number, name and address. AI can help the customer find the right item using shopping history, then quickly and effectively analyze cardholder information during checkout for a seamless online experience that meets their expectations. This personalization can create a secure omnichannel experience that reduces cart abandonment.
But what if it wasn’t the genuine cardholder making the purchase?
The technology intended to keep data safe and make our lives more efficient is being used to steal online information for financial gain. For example, fraudsters use AI technology — specifically AI bots — to submit online transactions through a combination of primary account numbers, card verification values and expiration dates. This has led to annual fraud losses of $1.1 billion.
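As a rough, hypothetical illustration of how an issuer might flag this kind of bot-driven card testing (this is not TSYS’s, Featurespace’s or any vendor’s actual detection logic), a simple velocity check counts authorization attempts against the same merchant or card range in a short window:

```python
from collections import defaultdict, deque
from time import time

# Hypothetical illustration: a naive velocity check that flags card-testing
# patterns, i.e. many authorization attempts against one merchant or card BIN
# in a short window. Real issuer systems are far more sophisticated.

WINDOW_SECONDS = 60   # look-back window (assumed value)
MAX_ATTEMPTS = 20     # attempts allowed per window before flagging (assumed value)

_attempts = defaultdict(deque)  # key -> timestamps of recent attempts

def record_attempt(key: str, now: float | None = None) -> bool:
    """Record an authorization attempt for `key` (e.g. a merchant ID or card BIN)
    and return True if the recent attempt rate looks like bot-driven card testing."""
    now = now or time()
    window = _attempts[key]
    window.append(now)
    # Drop attempts that have fallen out of the look-back window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_ATTEMPTS

# Example: 25 rapid attempts against the same BIN trip the flag.
if __name__ == "__main__":
    flagged = [record_attempt("bin:412345", now=1000.0 + i) for i in range(25)]
    print("card-testing suspected:", flagged[-1])
```

Static thresholds like this are only a first line of defense, which is part of why issuers are turning to the adaptive models discussed below.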
Banks want more security measures to strengthen verification of a cardholder’s information and authenticate their identity. But as additional steps are added to a transactional process — such as copying multiple authorization codes from a phone to a computer — it may lead to more friction.
"In fact, payment processing speeds have slowed, frustrating customers and merchants," said Don Riddick, Chief Legal Officer, Featurespace. “(Customers say) I want transactions to be secure, but I don’t want four one-time passwords. We need to use the right AI to remove friction.”
Balancing act: Data security and data sharing
It’s been said that people are the weakest link with data security. People are prone to errors, sometimes repeating the same mistakes without a clear solution.
With AI, banks have a new way to not only enhance data security but to balance it with the need for data sharing. Adaptive and generative AI models, in particular, analyze patterns in data to identify potential fraud activity.
The data part is crucial because the models continuously learn from fresh data, such as cardholder purchasing information, to identify anomalies and make decisions in real time. Traditional manual, rules-based technology, in comparison, relies on “aged” data. Its rules apply across the board rather than examining a specific behavior, so the same rule applies to every customer, making it difficult to adapt to ever-changing fraud patterns and potentially producing a large number of false positives.
AI, meanwhile, thrives on change. The technology applies more specific rules that look for certain combinations of cardholder behavior, so it can be tailored to a customer’s actions. The result is that it can identify patterns and adapt to evolving fraud trends for more effective detection and prevention.
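To make the contrast concrete, here is a minimal sketch assuming a simple per-cardholder running average of transaction amounts; it illustrates the adaptive idea only and is not the TSYS Foresight or Featurespace model:

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of the difference described above: a single static rule
# versus a per-cardholder baseline that keeps learning from fresh transactions.

STATIC_LIMIT = 500.0  # a one-size-fits-all rule: flag anything over $500

@dataclass
class CardholderProfile:
    """Running mean/variance of transaction amounts (Welford's algorithm)."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0

    def update(self, amount: float) -> None:
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)

    def is_anomalous(self, amount: float, z_threshold: float = 3.0) -> bool:
        if self.n < 10:  # not enough history yet; fall back to the static rule
            return amount > STATIC_LIMIT
        std = math.sqrt(self.m2 / (self.n - 1)) or 1.0
        return abs(amount - self.mean) / std > z_threshold

# Example: a cardholder who routinely spends ~$900 is not flagged at $950,
# even though the static rule would decline every one of those purchases.
profile = CardholderProfile()
for amount in [880, 910, 925, 870, 905, 890, 915, 900, 895, 920]:
    profile.update(amount)
print("static rule flags $950:", 950 > STATIC_LIMIT)              # True (false positive)
print("adaptive profile flags $950:", profile.is_anomalous(950))  # False
```

Because the profile keeps updating with each genuine purchase, the “normal” range moves with the cardholder, which is what lets adaptive systems cut the false positives a one-size-fits-all rule would generate.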
An example of this shift occurred during the COVID pandemic. People who had previously only made purchases in person were suddenly buying things online.
“Models that depended exclusively on rules or aged data sets would have deemed this activity suspicious, leading to higher decline rates until the models were refreshed and learned that legitimate buying behaviors had changed,” Black said.
It is perhaps unsurprising, then, that false declines were estimated to have cost merchants $16.3 billion in 2021.
Would using new technologies have made a difference?
“Clients who deployed new technologies, such as the TSYS Foresight solution, benefitted by the adaptive AI framework and saw higher approval rates,” Black said. “That means, those cardholders encountered less friction and continued to transact on their preferred cards without disruption. Today, the learnings from that application of adaptive behavioral analytics in transaction monitoring are being expanded to also support cardholder authentication and portfolio risk models on a broader scale.”
The human factor: Trust and transparency
With any new technology, there can be uncertainty or resistance. People may not trust it, may not know how to use it properly, or may simply be afraid to try it for fear of their data being stolen.
Earning cardholders’ trust is already a challenge for many financial institutions (FIs).
Only 57% of Americans trust FIs to protect their personal information. The same share would stop doing business with companies that suffered a breach or cyberattack that endangered their data. The U.S. is one of 13 countries where financial companies are distrusted.
From 2022 to 2024, customers’ trust in FIs’ data-tracking practices, specifically the belief that they collect only the personal and behavioral data they need, declined 30%.
Trust issues with data privacy and security aren't limited to FIs. These are also the two biggest barriers to AI adoption. In fact, privacy concerns are twice as widespread as concerns over job impacts, such as the fear of being replaced by AI.
So, is privacy the key to building cyber trust? Will the same hold true with AI?
It depends on who you ask, and where you live. Only 34% of Americans trust businesses to use AI effectively to protect against fraud compared with 63% of Brazilians.
“You want the payment process and authorization stream to be invisible, which goes back to the number of interactions. This holds true for AI as it works best where it isn’t seen,” Riddick said. “It really is about finding ways AI can help without interfering.”
As more companies and employees use and are trained on AI tools (currently, 51% of businesses use the technology to help with cybersecurity and fraud management), comfort with how it works and with the benefits it delivers could grow.
For example, imagine generative AI detecting fraudulent activity on an account and instantly sending a mobile alert about it. The technology is not only analyzing card data in real time but also communicating with the affected cardholder and the payment platform. Such first-hand experiences may help build trust.
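A minimal sketch of that flow follows, with placeholder scoring and notification helpers that are assumptions for illustration rather than a real TSYS or Global Payments API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the real-time alert flow described above. The
# score_transaction and send_push_alert helpers are placeholders, not a
# real issuer or processor API.

@dataclass
class Transaction:
    card_id: str
    merchant: str
    amount: float

def score_transaction(txn: Transaction) -> float:
    """Placeholder risk model; a production system would call an adaptive
    behavioral model trained on fresh cardholder data."""
    return 0.92 if txn.amount > 2500 else 0.05

def send_push_alert(card_id: str, message: str) -> None:
    """Placeholder for a mobile push notification to the cardholder."""
    print(f"[push -> {card_id}] {message}")

def handle_authorization(txn: Transaction, risk_threshold: float = 0.8) -> str:
    """Score the transaction in real time; alert the cardholder and hold the
    authorization when the risk score crosses the threshold."""
    risk = score_transaction(txn)
    if risk >= risk_threshold:
        send_push_alert(txn.card_id,
                        f"Did you just spend ${txn.amount:.2f} at {txn.merchant}? Reply to confirm.")
        return "held_pending_cardholder_confirmation"
    return "approved"

print(handle_authorization(Transaction("card-123", "Example Electronics", 3100.00)))
```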
“No matter how good the technology is, how do we use the technology but keep the human (experience)?” said Yogs Jayaprakasam, SVP, Chief Technology and Digital Officer, Deluxe. “Technology alone can’t solve problems.”1
The right approach
From friction to usability, there are questions and expectations with AI to fight fraud while keeping data safe. Answers may not be far away, and in some cases, are already here.
What matters is the approach to these challenges.
“AI holds incredible potential against fraud,” Black said. “It’s important for issuers to work with processing partners to embrace their infrastructure of trust and security to protect and leverage data, especially as cardholder behaviors and challenges continue to change.”
If you're interested in learning how TSYS can help you align your data and fraud fighting strategies within a comprehensive fraud solution, please click here.
1. Fintech South, “Fighting Fraud with AI: Optimizing Your Model Without Exposing Sensitive Data,” August 27