Can you trust your AI?

We've developed 7 questions that we think you should be asking your own solution vendor to better understand how they manage risk and ensure their AI-based solutions can be trusted.

One third of financial institutions are now using AI in their Anti-Money Laundering systems, and another third are experimenting with pilot projects.¹

And yet ensuring we can trust the decisions AI makes remains a major challenge for AML teams looking to deliver effective and efficient new approaches.

A risk-based approach is mandated for financial institutions, but AI presents new challenges for managing risk. We need to know that AI can be trusted to produce the right decisions, and how to manage the risk that it won't.

We provide automated AML investigation solutions to Tier 1 global banks, and the question of trust is raised increasingly often. Our own experience has helped us develop the top questions we think you should be asking your own AI solution vendor to better understand how they manage risk and build trust.

The 7 most important questions to ask your AI solution vendor

1. What are the major risks with your AI solutions?

Issue: All processes and tools introduce some risk, whether AI or traditionally engineered. A risk-based approach requires a transparent process of identifying, prioritising, and mitigating those risks, rather than a culture of denial.

The wrong answer: Your vendor claims that there are no risks, or only low risks, with their solution.

The right answer: Your vendor should have documented and prioritised the major risks with their solution, with mitigations in place or planned.


2. How does your AI solution perform on high-risk/high-impact classes?

Issue: Headline or aggregate performance figures can hide the most important risks. Money laundering and terrorist financing activity is rare, so a solution that declares every transaction safe will be right most of the time, but wrong when it matters most.

The wrong answer: Your vendor claims very high overall accuracy numbers.

The right answer: Your vendor identifies the highest-risk or highest-impact cases – such as high-risk customers, alerts, or geographies – and measures decision accuracy (including false positives and false negatives) on each.
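
To make this concrete: a model that flags nothing scores 99.9% accuracy on a portfolio where only 0.1% of cases are suspicious, so false negatives have to be counted per segment. The sketch below (in Python, with invented field names and segments, not taken from any particular product) shows the kind of per-segment measurement worth asking about.

```python
# Minimal sketch: per-segment accuracy, false positives, and false negatives.
# Field names ("segment", "label", "prediction") are illustrative assumptions.
from collections import defaultdict

def per_segment_report(cases):
    """cases: dicts with 'segment', 'label', 'prediction' (True = suspicious)."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for c in cases:
        actual, predicted = c["label"], c["prediction"]
        bucket = counts[c["segment"]]          # e.g. "high-risk geography"
        if actual and predicted:
            bucket["tp"] += 1
        elif predicted:
            bucket["fp"] += 1                  # safe case wrongly flagged
        elif actual:
            bucket["fn"] += 1                  # suspicious case missed
        else:
            bucket["tn"] += 1
    for segment, c in counts.items():
        total = sum(c.values())
        print(f"{segment}: accuracy={(c['tp'] + c['tn']) / total:.1%}, "
              f"false positives={c['fp']}, false negatives={c['fn']}")
```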


3. How do you ensure the quality of the training data used for your AI solution?

Issue: Garbage in, garbage out. If an AI or ML solution does not have access to high-quality ‘ground truth’ data, then it cannot make accurate decisions and, worse, we cannot judge the accuracy of those decisions.

The wrong answer: The quality of training data is taken for granted.

The right answer: Ground truth is determined by trusted subject matter experts, and the consensus between them is monitored and managed.
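
As an illustration of what monitoring consensus can mean in practice, the sketch below computes Cohen's kappa between two experts who label the same sample of cases. The expert labels and the 0.8 threshold are invented assumptions, not a description of any vendor's process.

```python
# Minimal sketch: inter-annotator agreement between two subject matter experts.
def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected by chance, given each expert's own label frequencies.
    categories = set(labels_a) | set(labels_b)
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

expert_1 = ["suspicious", "safe", "safe", "safe", "suspicious"]        # illustrative labels
expert_2 = ["suspicious", "safe", "suspicious", "safe", "suspicious"]
kappa = cohens_kappa(expert_1, expert_2)
if kappa < 0.8:                    # illustrative consensus threshold
    print(f"Low consensus (kappa={kappa:.2f}): review the labelling guidance")
```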


4. How do you detect if your AI-based decisions are biased?

Issue: AI solutions can introduce unwitting and discriminatory biases into AML processes, as spurious correlations between disadvantaged groups and higher-risk patterns of activity are mistaken for real causal links.

The wrong answer: We trust the AI not to discriminate.

The right answer: The factors that lead to each decision are known and explained, so human reviewers can detect any biases.
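
One concrete check a reviewer could ask for is a comparison of false positive rates across customer groups. The sketch below is purely illustrative; the group names, sample data, and disparity threshold are all invented.

```python
# Minimal sketch: compare false positive rates across customer groups.
from collections import defaultdict

def false_positive_rate_by_group(decisions):
    """decisions: dicts with 'group', 'label' (True = suspicious), 'flagged'."""
    stats = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for d in decisions:
        if not d["label"]:                        # genuinely safe cases only
            stats[d["group"]]["negatives"] += 1
            stats[d["group"]]["fp"] += int(d["flagged"])
    return {g: s["fp"] / s["negatives"] for g, s in stats.items() if s["negatives"]}

reviewed = (                                      # invented sample data
    [{"group": "A", "label": False, "flagged": f} for f in [True, False, False, False]]
    + [{"group": "B", "label": False, "flagged": f} for f in [True, True, True, False]]
)
rates = false_positive_rate_by_group(reviewed)
baseline = min(rates.values())
for group, rate in rates.items():
    if rate > 2 * baseline:                       # illustrative disparity threshold
        print(f"Possible bias: group {group} flagged at {rate:.0%} vs baseline {baseline:.0%}")
```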


5. How do you know if your live cases are outside your AI solution’s safe limits?

Issue: AI will only produce reliable outputs if the inputs are within the limits it was designed, trained, and tested on. If customer behaviour drifts outside these limits then the outputs can no longer be trusted.

The wrong answer: We assume the distribution of the data does not change.

The right answer: We monitor the distribution of the data (inputs and outputs), with alert thresholds set by stress testing the solution.
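
For example, one common way to monitor input drift is the Population Stability Index (PSI) between the training distribution and live traffic, binned on a feature. The bin fractions below are invented, and the 0.2 alert level is only a common rule of thumb; in practice the threshold should come from stress testing, as described above.

```python
# Minimal sketch: Population Stability Index between training-time and live
# distributions of one binned input feature. All numbers are invented.
import math

def psi(expected, actual, eps=1e-6):
    return sum((a - e) * math.log((a + eps) / (e + eps)) for e, a in zip(expected, actual))

training_bins = [0.40, 0.30, 0.20, 0.10]   # fraction of cases per feature bucket at training time
live_bins     = [0.25, 0.25, 0.25, 0.25]   # fraction per bucket in live traffic

score = psi(training_bins, live_bins)
if score > 0.2:                            # illustrative threshold; set by stress testing
    print(f"Drift alert: PSI={score:.2f}, live inputs are outside the tested limits")
```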


6. How can humans review your AI solution outputs effectively and efficiently?

Issue: Human analysts and QA can only effectively review decisions if they know how they were reached. If a black-box AI does not give a clear explanation, then human processes will have to recreate the entire investigation in order to check the result, reducing ROI.

The wrong answer: Analysts are just given decisions to review.

The right answer: Reviewers are given a human-readable explanation of the decision. Where no decision was possible, they are given guidance on what outstanding issues they need to investigate.
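
What such an output looks like will vary by vendor; purely as an illustration, the sketch below shows the kind of fields a reviewable decision could carry. All names and values are invented.

```python
# Minimal sketch: a reviewable decision output. Field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class ExplainedDecision:
    case_id: str
    decision: str                                      # e.g. "close", "escalate", "undecided"
    reasons: list[str] = field(default_factory=list)   # human-readable factors behind the decision
    outstanding_issues: list[str] = field(default_factory=list)  # what a reviewer still needs to check

example = ExplainedDecision(
    case_id="ALERT-0001",
    decision="undecided",
    reasons=["Counterparty name matched a watchlist entry with low confidence"],
    outstanding_issues=["Confirm counterparty identity against KYC records"],
)
```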


7. How do you resolve contested and wrong decision outputs from your AI solution?

Issue: Machines will not always get things right. When problems are found there has to be a process to determine the cause and address the issue.

The wrong answer: It is not clear how decisions are reached, so the institution has to take them or leave them.

The right answer: There is an audit trail for each decision, back to the labelled data used to train the model, so decision processes can be recreated and fixes tested.
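
As a rough illustration of what such an audit trail could record per decision, the sketch below links a decision to the model build, the labelled training data snapshot, and the stored inputs needed to replay it. All identifiers are invented, not taken from any real system.

```python
# Minimal sketch: one audit-trail record per decision. All identifiers are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionAuditRecord:
    decision_id: str
    case_id: str
    model_version: str          # exact model build that produced the decision
    training_set_version: str   # snapshot of labelled data the model was trained on
    input_snapshot_ref: str     # pointer to stored inputs, so the decision can be replayed

record = DecisionAuditRecord(
    decision_id="DEC-000123",
    case_id="ALERT-0001",
    model_version="risk-model-3.4.1",
    training_set_version="labels-2024-q1",
    input_snapshot_ref="case-store/ALERT-0001/inputs.json",
)
```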


Further reading

If you’d like to read more on the subject of trust in AI, you can find our paper ‘Trusting ML in Anti Money Laundering: A risk-based approach’ below: