A banker’s guide to AI, Part 1: How transparent is the AI?


 

This is a five-part series (published weekly) written by guest author Amber Sutherland. AI in banking is a massive subject that needs somebody who understands both subjects. We found that person in Amber Sutherland, who has over fifteen years of experience in finance, business development, and regulatory technology. She worked for a large global bank for a decade and experienced first-hand the evolution banks go through in their digital transformation and compliance journeys. This experience led her to move to regulatory technology, where she helped a former client enter the UK market. She then joined Silent Eight to help it enter the EMEA market and grow its business across EMEA and APAC. Silent Eight is an AI-based name, entity and transaction adjudication solution provider to financial institutions.

After almost a decade working in a large global bank, I can speak to the challenges faced by all three lines of defense in trying to combat financial crime. I can also attest to the effect these processes had on our clients. As a front-line corporate relationship manager, I frequently had to navigate the KYC, remediation and payment screening processes for my clients.

Not only was this an incredibly time-consuming and frustrating process at an organizational level, but more painful was the deleterious effect it had on our clients and their businesses: crucial payments to vendors were delayed unnecessarily; accounts took months to open and required incessant back-and-forth between multiple parties; and account fundings and transactions always came down to the wire because of basic due diligence, regardless of how much work you tried to do ahead of time.

Much of the process that required our intervention seemed mundane, repetitive and inefficient, which compounded everyone’s frustration. 

Sound familiar?

These types of repetitive, mundane tasks are ideally suited to being outsourced to artificial intelligence, as the industry now seems to realize.

Artificial intelligence can be an incredibly valuable tool, in that it can offload mundane tasks, provide insight into customer and employee behaviour, create more standardization, and help reduce or manage costs.

But as technology becomes increasingly sophisticated, there are many factors to weigh in the decision making process. 

After countless conversations with stakeholders and decision makers in the industry, I have learned that there are 5 main concerns when implementing regulatory technology, especially AI technology, in the financial sector.  

Each of these 5 concerns will be addressed in separate articles on Daily Fintech in the coming weeks:

  1. How transparent is the AI? (today)
  2. What if the AI learns the wrong behaviours, such as bias? (next week, 5 August)
  3. Does it have more than one purpose? What is the roadmap? (12 August)
  4. Is it better than what I have now? More accurate, faster, more standardized, more cost-effective? Can ‘better’ be tested quantifiably? (19 August)
  5. What are the redundancies? How will this technology affect my operational resiliency? (26 August)

How transparent is the AI? 

While this seems like a straightforward question, “transparent” really encompasses three separate factors:  

  1. Will my team and our stakeholders be able to understand how it works?
  2. Will I be able to easily demonstrate to Audit, the Board and regulators that it’s doing what it’s supposed to do?
  3. Can I get a snapshot of what is happening at any given moment?

All of the major regulators have stipulated that artificial intelligence solutions be explainable and demonstrable. Both of these concepts are rather self-explanatory but still worth exploring.

Explainability 

It’s not sufficient for your compliance team to understand how the AI makes decisions. They also need to be comfortable explaining the process to key stakeholders, whether they are board members, the internal model committee, audit, or the regulators. 

If your technical team can’t understand the technology or how decisions are made, or if the vendor claims confidentiality to protect their IP, this is a cause for concern.
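To make “explainable” a little more concrete, here is a minimal, purely hypothetical sketch in Python of how a screening decision might be surfaced alongside human-readable reason codes rather than a bare score. The field names, thresholds and rules are illustrative assumptions, not a description of any particular vendor’s product.

```python
# Illustrative only: a hypothetical shape for an "explainable" screening decision.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Reason:
    code: str    # machine-readable reason code
    detail: str  # plain-language explanation a reviewer can repeat to Audit


@dataclass
class Decision:
    alert_id: str
    outcome: str                       # e.g. "escalate" or "discount"
    confidence: float                  # model confidence, 0.0-1.0
    reasons: List[Reason] = field(default_factory=list)


def adjudicate(alert_id: str, name_similarity: float, country_match: bool) -> Decision:
    """Toy adjudication: every factor that moves the outcome is recorded
    as a reason, so the decision can be explained later."""
    reasons = []
    if name_similarity >= 0.92:  # threshold is an illustrative assumption
        reasons.append(Reason("NAME_MATCH_HIGH",
                              f"Name similarity {name_similarity:.2f} exceeds the 0.92 threshold"))
    if country_match:
        reasons.append(Reason("COUNTRY_MATCH",
                              "Customer country matches the watchlist record"))
    outcome = "escalate" if reasons else "discount"
    confidence = min(0.99, 0.5 + 0.25 * len(reasons))
    return Decision(alert_id, outcome, confidence, reasons)


if __name__ == "__main__":
    d = adjudicate("ALERT-0001", name_similarity=0.95, country_match=True)
    print(d.outcome, [r.code for r in d.reasons])
```

The point of the structure is simply that every factor which moved the outcome is captured in language a compliance officer could repeat to the Board or a regulator.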

Demonstrability

Like transparency, demonstrability captures a few components. It means you have to be able to demonstrate:

  • what decisions the AI has made;
  • what changes you’ve made to how the AI makes decisions; and
  • who made the changes.

This is where an audit trail comes into play. First of all, is there one? And if so, is it immutable, and does it capture all actions in the AI or just some of them? Is it exportable in report format and, if so, is the report readable and can it be easily understood?
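To illustrate what those properties imply in practice, below is a minimal, hypothetical sketch in Python of a tamper-evident audit trail: each entry is hash-chained to the one before it, so altering an earlier record breaks the chain, and the whole log can be exported as a readable report. It is a sketch under assumed requirements, not a reference to any specific product’s design.

```python
# Illustrative sketch of a tamper-evident, exportable audit trail (hypothetical design).
import hashlib
import json
from datetime import datetime, timezone


class AuditTrail:
    def __init__(self):
        self._entries = []  # append-only in-memory log

    def record(self, actor: str, action: str, detail: str) -> None:
        """Append an entry whose hash covers the previous entry's hash,
        so any later modification of earlier records is detectable."""
        prev_hash = self._entries[-1]["hash"] if self._entries else "GENESIS"
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self._entries.append(body)

    def verify(self) -> bool:
        """Recompute the chain to confirm no entry has been altered."""
        prev_hash = "GENESIS"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

    def export_report(self) -> str:
        """Export a human-readable report for Audit, the Board or a regulator."""
        lines = [f'{e["timestamp"]} | {e["actor"]} | {e["action"]} | {e["detail"]}'
                 for e in self._entries]
        return "\n".join(lines)


if __name__ == "__main__":
    trail = AuditTrail()
    trail.record("j.smith", "THRESHOLD_CHANGE", "Name-match threshold raised from 0.90 to 0.92")
    trail.record("model", "ALERT_DISCOUNTED", "ALERT-0001 discounted with 2 reason codes")
    print(trail.verify())
    print(trail.export_report())
```

Hash-chaining is just one way to make a log tamper-evident; the underlying requirement is simply that every action in the AI, and every change to it, is recorded in a form that can be verified and read later.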

Compliance is a data-driven discipline, and the risk associated with being deemed non-compliant is substantial. Being able to capture and export the changes to, and decisions made within, your AI is crucial to your relationships with your stakeholders.

As personal liability expands in the corporate world, board members and committees increasingly require not only an understanding of how compliance risk is being mitigated, but also clear evidence that it is being done, how, and by whom.

Index to future posts in this series:

A banker’s guide to AI Part 2. What if the AI learns the wrong behaviours, such as bias?

A banker’s guide to AI Part 3. Does the AI have more than one purpose? What is the roadmap?

A banker’s guide to AI Part 4. Is it better than what you have now?

A banker’s guide to AI Part 5. What are the third-party dependencies? How will this technology affect my operational resiliency?

Next week: A banker’s guide to AI Part 2. What if the AI learns the wrong behaviours, such as bias?

