Now let us dive deeper and see how this “AI bias” (for elaboration on the term, please see Part I of the article) can lead to discrimination in the real world.
“Current AI systems, such as those used for perception and classification, have different kinds of failure – characterized as rates of false positives and false negatives. They […] can exhibit unwanted bias in operation.” (p. 137, Final Report of the National Security Commission on Artificial Intelligence).
So, how does it happen?
- False Positives: AI and legacy rule-based systems return hundreds of sanctions-list matches. The system sends out alerts that, in the best case, queue up for manual double-checking by the FI’s compliance staff or, in the worst case, stay flagged — meaning all of these people are denied the services they requested due to “algorithmic bias.” For more details, see the Overview of “algorithmic [AI] biases.”
- False Negatives: a misspelled name, or a single letter changed to reflect a different pronunciation of the same name, is enough for the system to fail to recognize an individual who actually appears on a sanctions list. This paves the way for money laundering and terrorism financing.
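To see why a single changed letter defeats exact string matching while a phonetic comparison does not, consider the sketch below. It uses the classic Soundex algorithm (in a simplified form that omits the special h/w rule) purely as an illustration — it is not FinCom.co’s proprietary Phonetic Fingerprint technology, whose details are not public.

```python
def soundex(name: str) -> str:
    """Simplified classic Soundex: first letter plus up to three digits.

    Vowels and h/w/y are dropped; consonants map to digit classes;
    adjacent duplicate digits are collapsed; result is padded to 4 chars.
    """
    mapping = {c: d for d, letters in
               {"1": "bfpv", "2": "cgjkqsxz", "3": "dt",
                "4": "l", "5": "mn", "6": "r"}.items() for c in letters}
    name = name.lower()
    # Encode every letter; vowels and h/w/y become "".
    encoded = [mapping.get(c, "") for c in name]
    result = name[0].upper()
    prev = encoded[0]
    for code in encoded[1:]:
        if code and code != prev:  # skip runs of the same digit
            result += code
        prev = code
    return (result + "000")[:4]   # pad/truncate to 4 characters

# A one-letter spelling variant breaks an exact comparison
# but not the phonetic one:
print("mohammed" == "muhammad")                    # False
print(soundex("Mohammed") == soundex("Muhammad"))  # True (both M530)
```

Real screening engines go far beyond Soundex — handling transliteration from non-Latin scripts, multiple languages, and name-order variations — but the principle is the same: compare how names sound, not how they are spelled.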
As a result, financial institutions incur considerable losses:
- They spend hundreds of thousands of dollars, euros, and pounds on compliance and ethics staff who manually review all the alerts returned by the system;
- Still, the manual process is painfully slow, and clients whose names triggered the system’s alarm must wait days or even weeks before being approved for services;
- Manual processes are subject to human errors, intentional or unintentional;
- By discriminating against their clients, banks and other FIs become targets for legal claims, so they need a separate budget line for legal expenses, fines, and penalties associated with discrimination lawsuits;
- On the other hand, false negatives that are later identified as money laundering expose FIs to a different kind of legal charge: non-compliance with AML and CFT laws and regulations;
- In both cases (false positives or false negatives), companies suffer substantial reputational losses. Recovering from them requires massive, costly marketing campaigns, re-branding, and restructuring.
Fincom.co offers an elegant technological solution that helps financial institutions and other entities requiring accurate customer identification save tens or even hundreds of millions of dollars, effectively fight money laundering, and comply with AML regulations as well as anti-discrimination laws — all while safeguarding customers’ privacy.
Fincom.co is a game changer in the sophisticated world of AML compliance. Based on a unique Phonetic Fingerprint technology, it verifies information in 38 languages in real time, reducing false positives by 90% and thereby considerably mitigating — even eliminating — the compliance costs associated with AI bias.
Learn more about Fincom.co’s anti-discrimination solutions aimed at fighting AI bias.