
The downside of AI-based AML technologies

With technology advancing rapidly, banks and financial institutions are increasingly adopting AI- and ML-based solutions to handle various aspects of AML compliance, including sanctions screening, fraud detection, transaction monitoring, KYC onboarding, perpetual KYC, and other processes mandated by regulation. At first glance, AI-based solutions appear to be the most efficient and logical choice for AML compliance in banks and other regulated entities: they offer a swift and seemingly reliable automated process with minimal human involvement. However, a closer look at how AI actually works, and at the consequences of integrating these emerging tools into the financial and regulatory landscape, exposes some worrisome traits.

Today I’ll concentrate on just one of the problematic issues associated with introducing AI into AML compliance processes.

Traceability, transparency, and explainability

An AI/ML system processes new incoming data using a combination of algorithms embedded by its developers and rules the system derives on its own from training data. As a result, the system operates under rules that its users neither manage nor regulate, turning it into an opaque “black box” whose internal logic and decision-making lack transparency and traceability. Consequently, compliance personnel and other users of the system struggle to explain why it makes certain decisions, flags specific individuals or entities, or clears others.

It becomes increasingly difficult, if not impossible, to trace the source of a decision, so the regulators’ requirement for transparency and explainability goes unmet. The system makes decisions and provides answers, yet no human can confidently explain the reasoning behind them.

Furthermore, regulators require the performance and accuracy of the technological aids (systems) used in AML compliance processes, such as screening and monitoring, to be assessed. This evaluation, known as model validation, requires, among other things, understanding and explaining the logic behind each decision to flag or clear an individual or entity. The goal is to ensure that no threats are overlooked (no false negatives), that no individuals or legal entities are unjustifiably affected by the system’s defects and flaws (no unexplained false positives), and that the system is free from bias and complies with anti-discrimination laws.

What can be done

One way to minimize the ‘black-box’ effect is to set an extremely low fuzzy-matching threshold, causing the system to raise an alert on any individual or entity at the slightest doubt. However, this approach inevitably produces an excessive number of alerts (mostly false ones) that must be resolved manually, increasing the workload and the risk of human error, as the sketch below illustrates. As a result, the advantages of such a sophisticated AI model are largely negated. Moreover, this solution does not eliminate the ‘black-box’ phenomenon; it merely makes questionable cases explainable through the process of manual resolution.
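To make the trade-off concrete, here is a minimal sketch using hypothetical names, assumed thresholds, and an off-the-shelf character-level similarity measure rather than any production screening algorithm:

```python
# Minimal sketch: why an extremely low match threshold floods analysts
# with alerts. Watchlist, payment names, and thresholds are hypothetical.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Maria Gonzalez", "Chen Wei"]
PAYMENT_NAMES = ["Iwan Petroff", "Mario Gonzales", "John Smith",
                 "Chan Way", "Anna Schmidt", "Petra Ivanova"]

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1] (stand-in for fuzzy matching)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(threshold: float) -> list[tuple[str, str, float]]:
    """Alert on every payment name whose watchlist match meets the threshold."""
    return [(name, listed, round(similarity(name, listed), 2))
            for name in PAYMENT_NAMES
            for listed in WATCHLIST
            if similarity(name, listed) >= threshold]

# A strict threshold yields a handful of high-confidence alerts...
print(len(screen(0.85)), "alerts at threshold 0.85")
# ...while a very low one alerts at "the slightest doubt", burying
# analysts under false positives that all require manual resolution.
print(len(screen(0.30)), "alerts at threshold 0.30")
```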

Another solution is to implement an additional AI model that verifies and validates the decisions made by the original AI screening tool, offering a dedicated explanation for each decision (such systems are starting to emerge on the market). However, this approach seems somewhat impractical: validating one AI model with another AI model. Apart from the evident circularity, such a solution entails significantly higher expenses. In addition to the AML compliance system(s) themselves (sanctions screening, transaction monitoring, onboarding, ongoing monitoring, etc.), the regulated entity would need to incorporate an additional AI validation system, along with the compliance personnel needed to handle the alerted cases.

Transparent models

Models and algorithms grounded in explicit mathematics offer a viable alternative to relying solely on AI. Mathematical models are interpretable: they provide a clear account of how they arrived at a particular decision. This interpretability allows human analysts to understand why a transaction was flagged or cleared, enabling them to validate and explain the screening outcomes and making decisions easier to audit and justify when necessary.
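As a simple illustration of what such interpretability looks like (a textbook algorithm chosen for this post, not any specific vendor’s method), the sketch below computes the Levenshtein edit distance between two names together with a backtrace, so a match can be explained as an explicit list of character edits:

```python
# Sketch: an interpretable matching rule. Levenshtein edit distance with
# a backtrace, so every match decision comes with an explicit audit trail.

def edit_distance_with_trace(a: str, b: str):
    """Return the edit distance from a to b and the edits that explain it."""
    m, n = len(a), len(b)
    # dp[i][j] = minimum edits to turn a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / keep
    # Walk back through the table to recover the actual edits.
    edits, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and a[i - 1] == b[j - 1] and dp[i][j] == dp[i - 1][j - 1]:
            i, j = i - 1, j - 1                       # characters match: no edit
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            edits.append(f"substitute {a[i - 1]!r} -> {b[j - 1]!r}")
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            edits.append(f"delete {a[i - 1]!r}")
            i -= 1
        else:
            edits.append(f"insert {b[j - 1]!r}")
            j -= 1
    return dp[m][n], list(reversed(edits))

distance, trail = edit_distance_with_trace("Gonzalez", "Gonsales")
print(distance, trail)  # 2 ["substitute 'z' -> 's'", "substitute 'z' -> 's'"]
```

Every line of the audit trail follows from the table above, so an analyst or auditor can reproduce and defend the match by hand.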

Fincom’s solution in the context of traceability and explainability

Fincom’s system is built on mathematical, phonetic, and linguistic algorithms, creating a clear audit trail. Each decision path is traceable and explainable.
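For illustration only (a classic textbook algorithm, not Fincom’s proprietary method), a simplified Soundex encoding shows why deterministic phonetic rules are traceable: the same input always produces the same code, and every step follows a fixed, documentable rule.

```python
# Sketch: simplified American Soundex, a deterministic phonetic code.
# Vowels and h/w/y map to '0' (ignored); consonant groups share a digit.
SOUNDEX_MAP = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
               **dict.fromkeys("dt", "3"), "l": "4",
               **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(name: str) -> str:
    """First letter + three digits. (The full standard adds a further
    rule for 'h'/'w' separators, omitted here for brevity.)"""
    letters = [c for c in name.lower() if c.isalpha()]
    if not letters:
        return ""
    code = letters[0].upper()
    prev = SOUNDEX_MAP.get(letters[0], "0")
    for c in letters[1:]:
        digit = SOUNDEX_MAP.get(c, "0")
        if digit != "0" and digit != prev:
            code += digit
        prev = digit
    return (code + "000")[:4]

# No hidden state, no training data: the result is reproducible from
# the rules above, so a phonetic match is always explainable.
print(soundex("Smith"), soundex("Smyth"))    # S530 S530
print(soundex("Robert"), soundex("Rupert"))  # R163 R163
```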

The system provides consistent results, minimizing the risk of the unexpected fluctuations in false positives and false negatives that can accompany changes in AI/ML training data or algorithmic changes the system initiates internally.

Fincom’s solution offers a clear explanation of how it arrives at each decision. Human analysts, auditors, and regulators can therefore understand why a transaction was or was not flagged, and can explain and validate the screening outcomes when required. In other words, accountability for decision-making rests on explicitly defined processes rather than on complex, black-boxed AI algorithms, making it possible to justify and defend decisions when necessary.

