Discriminated by AML/CFT

Our world is one of ever-evolving technologies and innovations. It is also a world in pursuit of social equality and inclusion. It is a world of hope, and a world facing threats: terrorism, new weapons, drugs… The technologies do not care which side they serve, and the level of sophistication rises on both sides.

As threats become more sophisticated, so does the need for new defenses. Although financial institutions strive to comply with all AML and CFT regulations, they also need to consider how this compliance affects them and their customers. While these defenses target money laundering and terrorism financing, they also cause substantial collateral damage.

In addition to implementing their organizational and operational AML/CFT controls as per FinCEN regulations (documented in the AML Act of 2020), FIs should also consider and mitigate the discriminatory effects caused by enforcing those regulations.

Meanwhile, new regulations are being introduced – regulations that pertain to anti-discrimination and inclusion.

The enforcement of anti-discrimination regulations has now risen to the top of financial institutions’ priorities.


However, let’s consider how such discrimination might occur. 

Every FI, including banks, exchange houses, and others, uses various technologies to “filter” transactions. Millions of transactions are screened by these technologies to detect those that may (may!) be illegal. One of the most common controls is name screening: flagging customers whose names appear on sanctions lists. However, no technology is perfect, and both rule-based and AI-based screening methods produce a large number of false positives. But what does that mean?

First and foremost, it means that thousands, if not millions, of law-abiding citizens are discriminated against every day. They lose their credit, their bank accounts are closed, they cannot board a plane… yet the systems continue to “flag” them, since national security and fighting terrorism are deemed more important than individual rights. That is the personal level. At the institutional level, financial institutions’ reputations are damaged, they constantly face lawsuits, and, therefore, they suffer substantial losses.

Due to this reputational and financial damage, financial institutions have begun hiring more and more compliance and anti-discrimination staff, who manually review all the flagged names and try to minimize the damage by double-checking. This involves considerable expense, and the final result is still not error-free, since there is always room for human error and bias. In addition, manual screening is extremely slow: it can take days or even weeks for a transaction to be approved or a bank account to be opened.

Therefore, the answer lies with technology. But is technology impartial and unbiased? 

