The debate around AI ethics remains unsettled, despite the many moves by governments and regulatory bodies to put rulings and initiatives in place. Indeed, Forbes recently summarised the state of play across different countries and recognised ‘that most countries are still in an exploratory stage when it comes to governance and regulation relating to AI’.
That discussion is one for another day, but irrespective of such ethics-led initiatives, many technology companies are taking the lead and readily acknowledging the regulatory requirements that demand clear audit trails and reasoning as to why and how a machine makes decisions.
Indeed, when AI can explain its decisions in a human-readable format, the value it provides to financial services organisations reaches beyond simply knowing how a decision was made, especially when the decision is complex and augments an expert analyst’s judgement.
There are wider benefits to be had from machine-based decision insights. Nishanth Nottath of HSBC points to the question that lies at the heart of the rigorous reviews and audits that financial services firms are subjected to: “Did the system do what it was expected to do?” Within that core question lies the critical component of explainability.
Below we share a (non-exhaustive) list of the benefits of answering this question with explainable outputs.
Five reasons why explainable machine decisions are critical for financial services firms:
- Regulatory transparency – knowing that the right decisions are being taken is a given, but the only way to ensure transparency and confidence is to provide a human-readable explanation to others (such as regulatory bodies) of how and why those decisions are being appropriately reached.
- Quality assurance – the QA function of every financial services firm wants to build confidence in the outputs of any process in place. Explainable insights into machine decision outputs make it possible for auditors, stewards, analysts and QA teams to confidently sanity-check those outputs rather than taking them at face value.
- Continuous improvement – imperfect decisions will always occur to some degree, regardless of the maturity of the technology. Human understanding of the processes that led to an imperfect decision is imperative for recognising and implementing effective fixes.
- Analyst insight – machine-based decisions will never fully replace a risk analyst, because humans have an understanding of financial transactions that goes beyond the raw numbers. Explained machine decisions are, however, a powerful tool for any analyst wanting to speed up the handling of high-volume alerts and gain insight into where best to focus their valuable attention, or which new avenues of enquiry to follow.
- Bias correction – biases can exist unnoticed within a system, and its decisions are often trusted without the underlying data ever being interrogated for mistakes or misinterpretations. Clear explanations of all factors that may allow unwanted bias to creep into decisions mean those biases can be fixed before they have a material risk impact.
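To make the idea concrete, here is a minimal sketch of what a human-readable decision explanation might look like in code. The model, feature names, weights and threshold are all hypothetical illustrations, not any real scoring system; the point is simply that an additive model can report each factor's contribution alongside its yes/no output.

```python
# Minimal sketch of an explainable decision: a simple additive risk score
# where every factor's contribution is reported in plain language.
# All weights, features and the threshold below are hypothetical.

WEIGHTS = {
    "amount_vs_history": 2.0,   # transaction amount relative to usual spend
    "new_beneficiary": 1.5,     # payee not seen before on this account
    "high_risk_country": 3.0,   # destination flagged as high risk
}
THRESHOLD = 2.5                 # hypothetical alert threshold

def score_and_explain(features: dict) -> tuple[bool, list[str]]:
    """Score a transaction and return (alerted, per-factor explanation)."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    total = sum(contributions.values())
    alerted = total >= THRESHOLD
    # Render each factor as a readable sentence, largest contribution
    # first, so an analyst or auditor can sanity-check the alert.
    explanation = [
        f"{name} contributed {contrib:+.2f} to a total risk score of {total:.2f}"
        for name, contrib in sorted(
            contributions.items(), key=lambda kv: -abs(kv[1])
        )
    ]
    return alerted, explanation

alerted, reasons = score_and_explain(
    {"amount_vs_history": 0.8, "new_beneficiary": 1.0, "high_risk_country": 0.0}
)
# alerted is True (score 3.10 >= 2.50), and every factor's weight is on
# record for review, rather than an opaque yes/no decision.
```

Real systems use richer models and explanation techniques, but the principle is the same: the output carries its reasoning with it, so the five benefits above become possible in practice.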
It will not always be necessary for a machine to fully explain whether it did what it was expected to do, but in financial services the need is much greater, and the value goes far beyond being able to prove to an auditor that the right decision was taken. Explainable insights have the potential to revolutionise the way analyst workforces operate and help them to focus on the investigations that really matter in financial crime.