The Role of Financial Regulatory Agencies in Supervising Algorithmic Fairness in Transaction Monitoring

Posted on Mar 28, 2025 by Frans van Bruggen, Senior Policy Officer FinTech & Artificial Intelligence, De Nederlandsche Bank (DNB) and PhD candidate at Utrecht University, and Robert Schmitz, graduate programme trainee at De Nederlandsche Bank (DNB), currently staffed at Supervision Policy — Technology and Strategy


The views expressed in this article are those of the authors and should not be attributed to De Nederlandsche Bank (Dutch Central Bank).

Introduction

Across Europe, financial institutions increasingly rely on machine learning (ML) models for transaction monitoring (TM) to detect illicit activities such as money laundering and terrorism financing.[1] These AI-driven systems offer significant benefits, including increased efficiency, scalability, and enhanced model performance.[2] They help financial institutions reduce compliance costs, minimize societal impact through better-targeted investigations, and enable regulators to focus their supervisory efforts more effectively. However, the use of complex ML models raises concerns about algorithmic fairness.[3]

European financial regulators are increasingly aware of the risks these systems pose, particularly in terms of discriminatory practices, systemic biases, and financial exclusion.[4],[5] The challenge lies in ensuring that systems do not reinforce existing inequalities or create new forms of bias.

This article explores the key issues regulators face in supervising algorithmic fairness in TM. First, algorithmic fairness is a complex concept with multiple mathematical definitions, and different regulatory bodies may interpret fairness differently based on their jurisdiction and priorities. This article argues that financial regulators should focus on procedural oversight rather than defining fairness norms themselves.
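To make the multiplicity of definitions concrete, consider two widely used group fairness criteria: demographic parity (equal alert rates across groups) and equalized odds (equal true and false positive rates across groups). When base rates differ between groups, a model generally cannot satisfy both at once, which is precisely why the choice of definition is a normative rather than a purely technical decision. The sketch below uses hypothetical data and invented names, not any supervisory methodology, to compute both gaps for a toy set of TM alerts.

```python
# Illustrative sketch (not DNB methodology): two common fairness
# definitions applied to hypothetical transaction-monitoring alerts.
# "group" marks a protected attribute; "alert" is the model output;
# "illicit" is the ground-truth label. All data here is made up.

def demographic_parity_gap(records):
    """Difference in alert rates between groups (parity demands ~0)."""
    rates = {}
    for g in {r["group"] for r in records}:
        grp = [r for r in records if r["group"] == g]
        rates[g] = sum(r["alert"] for r in grp) / len(grp)
    return max(rates.values()) - min(rates.values())

def equalized_odds_gap(records):
    """Largest gap in true/false positive rates between groups."""
    gaps = []
    for label in (0, 1):
        rates = {}
        for g in {r["group"] for r in records}:
            grp = [r for r in records
                   if r["group"] == g and r["illicit"] == label]
            if grp:
                rates[g] = sum(r["alert"] for r in grp) / len(grp)
        gaps.append(max(rates.values()) - min(rates.values()))
    return max(gaps)

# Hypothetical alerts: the model satisfies one criterion but not the other.
data = [
    {"group": "A", "alert": 1, "illicit": 1},
    {"group": "A", "alert": 0, "illicit": 0},
    {"group": "A", "alert": 0, "illicit": 0},
    {"group": "A", "alert": 1, "illicit": 0},
    {"group": "B", "alert": 1, "illicit": 1},
    {"group": "B", "alert": 1, "illicit": 1},
    {"group": "B", "alert": 0, "illicit": 0},
    {"group": "B", "alert": 0, "illicit": 0},
]

print("demographic parity gap:", demographic_parity_gap(data))
print("equalized odds gap:   ", equalized_odds_gap(data))
```

In this toy example the alert rates are identical across groups, so demographic parity holds, while the false positive rates diverge, so equalized odds is violated: a model that is "fair" under one definition can be "unfair" under another.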

Second, data selection raises ethical challenges, particularly when deciding which data points an AI model may use. While certain variables should be excluded to prevent direct discrimination, even seemingly neutral data can lead to proxy discrimination, in which a permitted variable correlates strongly with a protected attribute. Removing such proxies, however, can also remove legitimate predictive signal, so regulators need to weigh (indirect) fairness against the effectiveness of TM models.
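As a rough illustration of proxy discrimination (hypothetical data; the feature and attribute names are invented), the sketch below measures how well a seemingly neutral feature such as a postal-code region predicts a protected attribute. If that predictive strength is high, excluding the protected attribute from a TM model changes little, because the model can reconstruct it from the proxy.

```python
# Illustrative sketch (hypothetical data): a "neutral" feature such as a
# postal-code region can act as a proxy for a protected attribute. Even
# if the protected attribute is excluded from the model, a feature that
# predicts it well will transmit the same information.

from collections import Counter

def proxy_strength(records, feature, protected):
    """Share of records whose protected value matches the majority
    protected value for their feature value, i.e. how well `feature`
    predicts `protected`. 1.0 means the feature fully reconstructs it."""
    majority = {}
    for val in {r[feature] for r in records}:
        counts = Counter(r[protected] for r in records if r[feature] == val)
        majority[val] = counts.most_common(1)[0][0]
    hits = sum(1 for r in records if majority[r[feature]] == r[protected])
    return hits / len(records)

# Hypothetical customers: postal region almost perfectly encodes group.
customers = [
    {"region": "R1", "group": "A"},
    {"region": "R1", "group": "A"},
    {"region": "R1", "group": "B"},
    {"region": "R2", "group": "B"},
    {"region": "R2", "group": "B"},
    {"region": "R2", "group": "B"},
]

print("proxy strength of region for group:",
      proxy_strength(customers, "region", "group"))
# ~0.83 here: dropping "group" from the model changes little if "region" stays.
```

The trade-off the paragraph describes follows directly: dropping the proxy too would reduce indirect discrimination, but a region variable may also carry genuine risk signal, so detection quality can suffer.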

Third, detecting fairness violations is not always straightforward, as it requires access to sensitive personal data — data that financial institutions are often legally restricted from collecting due to privacy regulations or choose not to store due to ethical concerns. This article examines how European financial regulators address these challenges, enforce compliance, and promote fairness in AI-driven TM.
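The measurement tension is easy to state in code: group fairness metrics cannot be computed without group membership, which institutions often may not store. One workaround discussed in the fairness literature is to infer membership probabilities from non-sensitive data and compute probability-weighted metrics. The snippet below is a hedged sketch of that idea; it assumes such probabilities already exist (the `p_group` field is hypothetical) and is an approximation, not an established supervisory method.

```python
# Illustrative sketch of the measurement tension described above: a group
# fairness metric needs group membership, but institutions often cannot
# store it. One workaround discussed in the fairness literature is to use
# *estimated* membership probabilities (inferred from non-sensitive data)
# and compute probability-weighted alert rates. This is an assumption-laden
# approximation, not an established supervisory method.

def weighted_alert_rates(records):
    """Alert rate per group using soft membership probabilities.
    Each record carries p_group: {group: probability}."""
    num, den = {}, {}
    for r in records:
        for g, p in r["p_group"].items():
            num[g] = num.get(g, 0.0) + p * r["alert"]
            den[g] = den.get(g, 0.0) + p
    return {g: num[g] / den[g] for g in num}

# Hypothetical records with inferred (not stored) group probabilities.
records = [
    {"alert": 1, "p_group": {"A": 0.9, "B": 0.1}},
    {"alert": 0, "p_group": {"A": 0.8, "B": 0.2}},
    {"alert": 1, "p_group": {"A": 0.2, "B": 0.8}},
    {"alert": 1, "p_group": {"A": 0.1, "B": 0.9}},
    {"alert": 0, "p_group": {"A": 0.3, "B": 0.7}},
]

print(weighted_alert_rates(records))
```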

Societal Purpose and Pitfalls of TM

TM is a core process through which financial institutions detect and prevent money laundering and terrorism financing. By scrutinizing transactions and identifying suspicious activities, TM helps safeguard the integrity of the financial system. In the Netherlands, the financial supervisor De Nederlandsche Bank (DNB) estimates that approximately €13 billion is laundered annually in the Dutch financial system.[6] While an efficient Dutch digital infrastructure and well-developed financial sector support genuine economic activity, they also attract ...



