What FCC executives need to know about Model Risk Management

Why FCC executives need to care about model risk management.

The European Banking Authority (EBA), the Basel Committee on Banking Supervision (BCBS), the Bank of England and the Federal Reserve all now mandate that financial institutions evaluate and manage their exposure to risks arising from the use of quantitative models in their operations. These guidelines were originally introduced to manage exposure to market risk, but the same principles and practices are now being extended to models used in other areas of operations – and that increasingly means Anti-Money Laundering (AML) and Counter-Terrorism Financing (CTF) compliance.

As of 2018, 35% of 59 financial institutions surveyed used machine learning to automate processes in AML, and a further 34% were experimenting with pilot projects (Institute of International Finance). Most, if not all, of these projects apply models to core Financial Crime Compliance (FCC) operations, and so will be covered by Model Risk Management guidelines.

At the same time, both the Bank for International Settlements (BIS) and the Financial Action Task Force (FATF) mandate that banks take a risk-based approach to AML and CTF – and this applies to model-based systems as much as to the human-based processes they are replacing.

So what do FCC executives need to know about managing model risk in the automated systems they are adopting?

What is a model and what are the risks?

Traditionally, financial institutions’ AML and CTF policies were defined as processes and practices followed by human analysts. The first wave of automation in AML – including transaction monitoring systems and sanctions screening services – encoded those policies into rules that could be applied automatically, so there was little significant change in risk. But the next wave of automation uses machine learning (ML) to train models whose learned behaviour, rather than fixed rules, determines their decisions. These systems have the potential to reduce false positives and increase efficiency – but they also carry new risks.

Wherever machine learning is used in AML there will be a model, and that model will carry risks that must be managed.

Whatever the system, the general process of managing model risk is the same: define the risks, prioritise them, and find mitigations. Our experience deploying solutions in Tier 1 banks has helped us to compile a comprehensive list of risks and mitigations that you can find here. The exact risks presented by each system will differ, but in general they are of two kinds: validity and accountability.

Validity

Validity is the risk that the system will make the wrong decision and, worse, that we may not know it has done so. Performance is usually measured by testing the system on historical examples and comparing its decisions to those of human experts. But although the overall performance of the system may be at or above human levels when tested in this way, this can hide serious problems. The system may make incorrect decisions on high-impact cases, decisions may be biased against specific classes of customer, or models may become out of date as customer behaviour changes. Development teams need to demonstrate that they have identified and evaluated these risks – and mitigated them where possible. In general, this validation falls into three stages:

  • Conceptual Soundness. Does the design and construction of the model follow our best understanding of the problem? If the model is trained on data labelled by human experts, how do we ensure they are making the correct judgements? How is validity tested? Does the testing include sufficient edge cases – especially high-impact or high-risk ones? High net worth individuals or specific geographies might present specific risks, for example, so the institution needs to know how the system performs on those cases in particular (see the sketch after this list).
  • Ongoing Monitoring. Is the model working as intended on live cases? What are the safe limits of operation of the system, and do changes to customer behaviour or internal data sources take it outside those limits? Is the model affected by discriminatory biases or confounders? Are there anomalous behaviours for which decisions cannot be trusted?
  • Outcomes Analysis. Is the solution effective? It is hard to determine the ‘ground truth’ in the case of suspected illicit activity, but there must still be an effective mechanism for independently evaluating the decisions of the system. How are discrepancies resolved and learnt from?
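
To make segment-level testing concrete, the sketch below shows one way to compare model decisions against expert judgements both overall and per customer segment, flagging segments whose agreement rate falls well below the headline figure. It is a minimal sketch in plain Python; the case records, accessor functions and tolerance threshold are illustrative assumptions, not part of any specific product.

    from collections import defaultdict

    def segment_agreement(cases, model_decision, expert_decision, segment_of):
        # Compare model decisions to expert judgements, per segment and overall.
        # 'cases' is an iterable of case records; the three accessors are
        # functions supplied by the caller (all names here are illustrative).
        totals = defaultdict(lambda: [0, 0])  # segment -> [agreements, cases]
        for case in cases:
            agrees = model_decision(case) == expert_decision(case)
            for key in (segment_of(case), "ALL"):
                totals[key][1] += 1
                totals[key][0] += int(agrees)
        return {seg: agree / n for seg, (agree, n) in totals.items()}

    def weak_segments(rates, tolerance=0.05):
        # Headline accuracy can hide poor performance on high-risk segments:
        # flag any segment more than 'tolerance' below the overall rate.
        return [seg for seg, rate in rates.items()
                if seg != "ALL" and rate < rates["ALL"] - tolerance]

The same per-segment breakdown can be run on live decisions during ongoing monitoring, so that drift in a specific customer population is caught before it shows up in headline performance.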

Accountability

Accountability is a risk because many ML systems are ‘black boxes’: it may not be clear how they make specific decisions. Effective oversight of, and accountability for, the model depends on knowing how and why each decision was reached in a human-legible way. The details depend on the nature of the system, but as a minimum they should include: 

  1. Audit: If the financial institution cannot recreate decisions – including the training data and state of the model – then it cannot justify them. An effective audit trail requires that every decision be linked to a model version, including training data, model algorithm, training regime, and any other priors that affected the decision (see the sketch below). 
  2. Explainability: If humans cannot understand why the model made the decisions it did, then those decisions cannot be explained to regulators or risk stewards. Nor can human monitors evaluate contested decisions or detect biases. Decisions should be explained in a form, and at a level of detail, such that a non-technical Subject Matter Expert (SME) would agree they were justified based purely on that explanation – ideally a plain language explanation, backed by quantitative evidence.
[Screenshot: Caspian AML Investigator showing a full explanation of how an automated decision was reached. Model Risk Management is supported by full decision explanations in Caspian AML Investigator.]
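
As an illustration of the audit requirement above, the sketch below shows the kind of record an institution might persist for every automated decision, linking it to the model version, training data and explanation so that the decision can be recreated and justified later. It is a minimal sketch in Python; the field names and the 'model' metadata interface are assumptions for illustration, not a description of any particular system.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class DecisionAuditRecord:
        # Identity of the decision itself.
        case_id: str
        decision: str                 # e.g. "escalate" or "close" (illustrative values)
        decided_at: datetime
        # Everything needed to recreate the decision later.
        model_version: str            # exact model build that produced the decision
        training_data_hash: str       # fingerprint of the training set used
        training_regime: str          # reference to algorithm and training configuration
        input_snapshot_ref: str       # pointer to the case data as the model saw it
        # Human-legible justification for reviewers and regulators.
        explanation: str              # plain language explanation
        evidence: dict = field(default_factory=dict)  # quantitative backing, e.g. feature scores

    def record_decision(case_id, decision, model, explanation, evidence):
        # 'model' is assumed to expose version and training metadata (illustrative).
        return DecisionAuditRecord(
            case_id=case_id,
            decision=decision,
            decided_at=datetime.now(timezone.utc),
            model_version=model.version,
            training_data_hash=model.training_data_hash,
            training_regime=model.training_regime,
            input_snapshot_ref=model.snapshot_ref(case_id),
            explanation=explanation,
            evidence=evidence,
        )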

Managing model risk

Institutions are increasingly using Model Review Boards to formalise the process of model risk management. But even if there is no review board, it is still the responsibility of the institution’s second line of defence to ensure that the model risk is effectively managed. This need not be a heavyweight process. The essentials are to:

  1. Maintain a list of risks – with priorities and plans for mitigation. This is the responsibility of the team, or vendor, that develops the solution (a minimal sketch of such a register follows this list).
  2. Review this list regularly. This is the responsibility of the institution’s second line of defence.
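
A risk register of this kind need not be elaborate. The sketch below is a minimal illustration in Python, with made-up field values: each risk carries a priority, a mitigation plan, an owner, and a last-reviewed date that the second line of defence can check for staleness.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ModelRisk:
        description: str     # what could go wrong
        priority: str        # e.g. "high", "medium" or "low"
        mitigation: str      # planned or implemented mitigation
        owner: str           # development team or vendor responsible
        last_reviewed: date  # updated at each second-line review

    register = [
        ModelRisk(
            description="Performance degrades on high net worth customer segment",
            priority="high",
            mitigation="Segment-level testing before each model release",
            owner="vendor",
            last_reviewed=date(2019, 1, 15),  # illustrative date
        ),
    ]

    def overdue(risks, today, max_age_days=90):
        # Flag risks the second line of defence has not reviewed recently.
        return [r for r in risks if (today - r.last_reviewed).days > max_age_days]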

As the FATF guidance states, ‘the risk-based approach is not a “zero failure” approach … rather, the goal is to ensure that there is a process in place to confirm that everything possible is done to identify, prioritise, and mitigate those risks at an acceptable level.’ Making ‘zero failure’ the only goal discourages honest and thorough testing, and it discourages openness about possible risks – both of which are necessary to ensure risks are identified and addressed.

Further reading