The Anti-Money Laundering Act of 2020 (AMLA) stipulates that, along with implementing FinCEN’s organizational and operational AML/CFT regulations, financial institutions must also consider and minimize any discriminatory effects that result from enforcing those regulations.
Similarly, the Equal Credit Opportunity Act “makes it illegal for a creditor to discriminate in any aspect of a credit transaction based on certain characteristics.” It is illegal to refuse credit or to close an account on the basis of race, religion, national origin, and similar characteristics.
Strengthening compliance with anti-discrimination regulations has therefore become a priority for financial institutions.
More and more rule-based and AI-based technological solutions are being developed and used by banks, exchange houses, and other financial institutions to ensure compliance with AML/CFT regulations. However, these systems also produce large numbers of false positives, fail to identify cross-language matches in multilingual environments, and have difficulty detecting name variations, especially in structured names. The sketch below illustrates the name-variation problem.
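As a minimal illustration (the watchlist entries and customer names are invented, and this is not any vendor’s actual screening logic), the snippet below shows how exact string matching treats common transliteration variants of the same name as entirely different people:

```python
# Hypothetical watchlist and customer names, for illustration only.
watchlist = {"MOHAMMED KARIMOV", "ALEKSANDR PETROV"}

customers = [
    "Muhammad Karimov",   # transliteration variant of the first entry
    "Alexander Petroff",  # Latinized spelling variant of the second
]

for name in customers:
    hit = name.upper() in watchlist
    print(f"{name!r}: {'MATCH' if hit else 'no match'}")

# Both names print "no match": exact comparison treats each spelling
# variant as a different person, so real risks slip through, while
# crude fuzzy workarounds flood analysts with false positives instead.
```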
Alice Xiang, head of Sony Group’s AI Ethics Office, notes that there has never been such a thing as a “perfectly unbiased algorithm” in AML/CFT systems… at least until today.
Individual biases and internalized prejudices that creep into AI and ML algorithms can give a system a built-in discriminatory character. Such systems serve only a portion of the population while creating discrimination and reinforcing biased human attitudes through the most seemingly unbiased thing on the planet: machines.
Socially responsible corporations now employ entire teams of ethics specialists whose aim is to prevent, among other things, social, racial, religious, or any other kind of bias from infiltrating their AI algorithms.
Xiang continues: “if you have a self-driving car, then you need to think about being able to detect pedestrians and ensure that you can detect all sorts of pedestrians and not just people that are represented dominantly in your training or test set.”
This is an interesting parallel, because it is precisely the issue solved by FinCom’s proprietary algorithm based on phonetic fingerprints.
If you view pedestrians (all pedestrians) not as a single unit but as a collection of universal features (molecules, if you like), combined in an infinite number of variations, each of which produces a unique pedestrian, then there can be neither bias nor the discrimination that follows from it.
FinCom’s technological solution does exactly this: it “sees” an entity (individual or business) as a set of phonemes. That combination of phonemes is represented by a mathematical code, which is scanned and monitored in real time; an accurate alert is raised whenever the combination matches an identical one from a sanctions list or other database.
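FinCom’s actual algorithms are proprietary and are not described here, so the sketch below uses the classic public-domain Soundex code purely to illustrate the principle: a name is reduced to a phonetic fingerprint, and spelling variants that sound alike collapse to the same code, which can then be matched exactly and deterministically:

```python
# A sketch of phonetic fingerprinting using the classic Soundex algorithm.
# This is NOT FinCom's proprietary method; it only illustrates the idea
# that names sounding alike collapse to one deterministic code.

SOUNDEX_CODES = {
    **dict.fromkeys("BFPV", "1"),
    **dict.fromkeys("CGJKQSXZ", "2"),
    **dict.fromkeys("DT", "3"),
    "L": "4",
    **dict.fromkeys("MN", "5"),
    "R": "6",
}

def soundex(name: str) -> str:
    """Return the 4-character Soundex code for a single word."""
    word = "".join(c for c in name.upper() if c.isalpha())
    if not word:
        return ""
    digits = []
    prev = SOUNDEX_CODES.get(word[0], "")
    for ch in word[1:]:
        code = SOUNDEX_CODES.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "HW":  # H and W do not separate same-coded consonants
            prev = code
    return (word[0] + "".join(digits) + "000")[:4]

# Hypothetical sanctions entry and incoming spelling variants:
watchlist_fingerprints = {soundex("Mohammed")}

for variant in ["Muhammad", "Mohamad", "Mohammed"]:
    code = soundex(variant)
    status = "ALERT" if code in watchlist_fingerprints else "no match"
    print(f"{variant}: {code} -> {status}")
```

All three spellings reduce to the same fingerprint (M530), so each variant raises the same alert. A deterministic code comparison of this kind is transparent and explainable in a way that an opaque ML similarity score is not.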
The entity-resolution technology developed and patented by FinCom is at the core of the engine. It uses 44 advanced algorithms: pure mathematics that is transparent, explainable, neutral, and impartial.
This technology can save institutions millions of dollars. It helps to:
- Minimize expenses associated with supporting numerous compliance units and departments staffed by thousands of employees;
- Avoid reputational damage and consequent loss of clients and profits;
- Ensure compliance with AML/CFT regulations and anti-discrimination requirements;
- Avert massive financial losses resulting from fines and penalties;
- Effectively fight and win the war against financial crime, money laundering, and terrorism financing.
Learn more about FinCom’s AML/CFT and anti-discrimination solutions.