Closing the Context Gaps: How to Fight Crime While Protecting Privacy in Payment Systems
Posted on Oct 23, 2025 by David Sutton, Senior Director – AI Research and Innovation, Visa
Fraud is rising at an alarming rate. According to the FBI’s Internet Crime Report, losses to fraud in the US alone rose 33% to $16.6 billion in 2024 – an estimated $31,600 stolen every minute.[1] Globally, the United Nations Office on Drugs and Crime (UNODC) estimates that between 2% and 5% of global GDP – up to $2 trillion – is laundered each year.
The proceeds of fraud and money laundering rarely sit idle. They are a lifeline for the world’s most damaging criminal activities: terrorism, civil wars, human trafficking, and the narcotics trade. Every fraudulent payment or laundered sum risks fuelling these threats.
Payment System Health and Confidence
One of the most important indicators of a payment system’s health is the degree to which it is free from fraud and criminal misuse.
Fraud and scams are among the most visible problems affecting consumers, but they are only one part of a bigger challenge. Money laundering, sanctions evasion, and terrorist financing undermine both the integrity of the entire system and customers’ trust in it.
This issue is central to debates around cryptocurrencies, which have been noted for their role in enabling anonymous transfers that are difficult to track, and is also a motivating factor behind the development of Central Bank Digital Currencies (CBDCs). A robust payment system must strike the right balance between security and privacy.
The Challenge: Context Gaps in Payment Systems
A key vulnerability that criminals exploit is the context gap in payment systems.
In a typical transaction, there are two parties – a sender and a receiver – and each party’s bank holds important contextual information about its own customer. This might include:
- Identity (name, age, address)
- Contact information
- Location and travel habits
- Line of business or occupation
- History of financial activity (both long-term and recent)
- Connections to other accounts
- Flags for suspicious activity
These pieces of information, when brought together, can be highly predictive in detecting criminal intent.
However, each bank only sees its own half of the story:
- A sender’s bank may know its customer has a moderately suspicious history but knows nothing about the recipient.
- A receiver’s bank may know its customer has links to other risky accounts but knows nothing about the sender’s behaviour.
Viewed separately, each bank sees an incomplete risk story that doesn’t always raise a flag. Sharing this information would close the context gap, enabling a more complete risk assessment and flagging suspicious activity against each bank’s defined threshold for action on fraud.
However, sharing all of this information between banks is constrained by data protection laws such as the EU’s GDPR, and could potentially undermine consumer trust.
A New Approach: Privacy-Enhancing Technologies (PETs)
Recent advances in artificial intelligence (AI) offer a way forward: the use of privacy-enhancing technologies (PETs) to enable collaborative decision-making without disclosing sensitive data.
One promising approach combines two techniques:
1. Vertical Federated Learning (VFL)
- Each bank keeps its sensitive data locally.
- A shared AI model is trained collaboratively across banks, combining risk intensity signals from each bank, without moving the underlying sensitive data.
2. Local Differential Privacy (LDP)
- The system adds carefully calibrated “noise” to the risk intensity signals before they leave the bank’s secure environment.
- This ensures that, even if intercepted, the signals cannot be reverse-engineered to reveal any sensitive information.
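As a concrete illustration of the LDP step, the sketch below privatises a single risk intensity signal with the standard Laplace mechanism before it leaves the bank. The function name, the [0, 1] score range, and the use of a single scalar signal are illustrative assumptions, not details of any production system.

```python
import math
import random

def ldp_noise_signal(risk_score: float, epsilon: float = 1.0,
                     lower: float = 0.0, upper: float = 1.0) -> float:
    """Privatise a bank's local risk signal with the Laplace mechanism.

    The score is clamped to a known range so its sensitivity is bounded,
    then Laplace noise with scale sensitivity/epsilon is added before the
    signal leaves the bank's secure environment. Smaller epsilon means
    more noise and a stronger privacy guarantee.
    """
    clamped = max(lower, min(upper, risk_score))
    sensitivity = upper - lower
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via inverse transform sampling.
    u = random.random() - 0.5
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return clamped + noise
```

The noise is zero-mean, so aggregate statistics remain useful to a shared model, while any single intercepted signal reveals little about the underlying customer data.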

Figure 1: Vertical Federated Learning (VFL), combined with Local Differential Privacy (LDP), keeps sensitive personal data safely on each bank’s system while allowing privacy-preserved risk intensity signals to be combined into a shared risk score.
VFL and LDP together make it possible to build a decentralised, real-time, collaborative machine learning model that can detect suspicious transactions by combining contextual signals from both sides of a payment – without exposing private customer data to other banks or external parties.
The PETs Prize Challenge
The potential of this technology was validated in the US-UK PETs Prize Challenge, an international competition organised by the US and UK Governments, in collaboration with Swift, to develop privacy-preserving solutions for detecting financial crime in cross-border payments.
Most of the winning solutions (including the author’s) relied on the combination of vertical federated learning and local differential privacy.
To ensure these systems were truly private, the competition set red teams against each system, simulating the attacks of sophisticated threat actors. These included:
- Highly advanced AI systems employing powerful statistical methods
- “Colluding” banks trying to piece together data from intercepted communications
The results were clear: context gaps in payment systems can be closed without compromising the sensitive information in the underlying private data.

Figure 2: The noise level in Local Differential Privacy lets us tune the trade-off between accuracy improvement and privacy protection guarantees. Privacy is measured by a red team attempting to reverse-engineer the sensitive data using sophisticated AI-powered inversion attacks. In this example from the PETs Prize Challenge, Featurespace’s PETs solution could be tuned to achieve >80% of the benefit of full data sharing with zero privacy leakage in testing environments.[2]
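The tuning knob in that trade-off is the privacy parameter epsilon: smaller epsilon means more Laplace noise on each signal and stronger privacy, at the cost of a noisier shared score. The short experiment below, which assumes a [0, 1] risk signal and is not based on the competition’s data, simply measures the average distortion the noise introduces at different epsilon values.

```python
import math
import random
import statistics

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def mean_abs_error(epsilon: float, n: int = 5000) -> float:
    """Average absolute distortion LDP noise adds to a [0, 1] signal
    (sensitivity 1, so the noise scale is 1/epsilon)."""
    return statistics.fmean(abs(laplace_noise(1.0 / epsilon)) for _ in range(n))

random.seed(42)
for eps in (0.5, 1.0, 2.0, 5.0):
    print(f"epsilon={eps}: mean |noise| = {mean_abs_error(eps):.3f}")
```

The distortion falls as epsilon rises, which is exactly the accuracy–privacy dial the figure describes; choosing the operating point is then a policy decision informed by red-team results like those in the challenge.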
Looking Ahead: Building Privacy into Payment Systems
The technology to enable collaborative, privacy-preserving decision-making already exists. By adopting advanced privacy-enhancing technologies like vertical federated learning and local differential privacy, banks and central banks can create systems that see the whole picture of a transaction without putting customer data at risk.
The next step is to make it a design feature of new payment systems, including future CBDCs.
This will require:
- Banks to be willing to collaborate, even with competitors
- Regulators to create frameworks that encourage and enable secure data collaboration
- Central banks to consider PETs as part of their digital infrastructure planning
The stakes are high. Payment systems that can detect and prevent fraud without undermining privacy will strengthen public trust, safeguard the economy, and cut off funding to some of the world’s most dangerous criminal enterprises.
The technology is ready. The question now is whether the financial sector has the will to work together to deploy it.
About Featurespace
Featurespace, a Visa Solution, is a global, AI-native transaction monitoring company that helps to prevent fraud and financial crime. Using artificial intelligence, it analyses data in real time to identify and stop existing and new forms of fraud and financial crime.
Delivering on its mission to make the world a safer place to transact, Featurespace works with many of the world’s largest banks and financial institutions, protecting 500 million consumers globally and safely processing over 100 billion payment events each year.
Over 100,000 businesses put their trust in Featurespace’s technology, including NatWest, TSYS, Worldpay, Danske Bank, Akbank and Edenred. Founded in 2008, and headquartered in Cambridge, UK, Featurespace has more than 400 team members, operating globally from six locations. Learn more at featurespace.com.
[1] Source: https://www.fbi.gov/contact-us/field-offices/atlanta/news/the-fbi-released-its-internet-crime-report-2024
[2] Results may vary.
No part of Central Bank Payments News may be reproduced, copied, republished, or distributed in any form or by any means, in whole or in part, without the express and prior written permission of the publisher, Currency Research Malta, Ltd.
