Trust in AI is becoming our biggest challenge

I recently spent an enjoyable and thought-provoking evening with colleagues from a global bank discussing innovation in Financial Crime and Compliance. AI featured heavily in our discussions, particularly the issue of Trust in AI. The discussion was prompted by recent work on the safety of AI systems in medical decision-making by researchers such as Luke Oakden-Rayner.

The conclusions from Oakden-Rayner’s latest research make for sobering reading for any AI practitioner. He recognises the promise of the technology as applied to medical imaging (specifically PET/MRI denoising) but questions its ability to truly (and safely) recognise disease. Interestingly, he cites the criticality of model training by human experts in any AI application. Those human experts are currently the safety net: they assess flaws in images and thereby identify missed disease traits. His view is that safety can only be achieved by focusing on diagnostic traits (how humans think) as opposed to simply identifying visual similarities. He is an active supporter of trialling the technology, but right now his trust in AI is low.

This is particularly relevant to Financial Crime and Compliance (FCC) practitioners because it adds more fuel to an already simmering debate in the AI research community about trust and the foundations of current approaches. With many banks now driving full steam ahead with AI initiatives across FCC, we cannot compromise on AI safety. The industry needs to deliver transparent AI solutions that can be interpreted by humans ‘in the loop’, backed by rigorous model validation and ongoing monitoring in the live environment.

If AI adoption is truly to go mainstream in FCC, then putting humans at the core of advancement is what will take us beyond narrow AI applications. More initiatives that support explainability and transparency (to humans) are an important step in building trust and supporting validation across a community in which the safe identification of crime is critical. That, coupled with increased education and support from practitioners and regulators, will help us build a greater understanding of the technology’s real potential in a world still heavily reliant on human workforces struggling to keep pace with the technological advancement of financial crime.

We’ll be sharing more insight into our own thinking on Trust in AI over the coming months, but for further reading on the subject I’d highly recommend the book ‘Rebooting AI’ by Gary Marcus (a trained cognitive scientist), which shines a light on the perceived weaknesses of deep learning and the lessons the field can borrow from the human mind.