This is the second in a five-part series (published weekly) written by guest author Amber Sutherland, a banker who understands technology and who currently works for Silent Eight, an AI-based name, entity and transaction adjudication solution provider to financial institutions. Click here for the Index and Part 1.
The underlying questions here, without detracting from the very serious concern about embedding existing unconscious bias into your AI, are as follows:
- If the AI is wrong, or my requirements change, can I fix it? How easily?
- What impact will tweaking the AI have on everything it’s already learned?
An industry journalist recently asked me if I thought bias was a problem with AI. My answer to her, and to all of you, is that AI simply learns what’s already happening within your organization. As a result, unconscious bias is one of the things that AI can learn, but it doesn’t have to be a problem.
While you can’t really prevent AI from learning from past decisions (that’s kind of the point), good technology should enable you to identify when it has learned something wrong, and to tweak it easily so that bad decisions don’t become embedded in its decision-making.
This ties in to the need for transparency and reporting. It’s not only necessary to see how decisions are made; you also need to be able to prevent poor decisions or bias from becoming part of the AI’s education. And all of this needs to be documented.
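To make that concrete, here is a minimal sketch of what documenting each automated decision could look like. It assumes a hypothetical `DecisionRecord` structure and a JSON Lines audit log; the field names (`alert_id`, `top_factors`, `model_version`) are illustrative, not Silent Eight’s actual schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative only: one way to make every automated decision reviewable.
# Each adjudication is written out with the inputs and explanation it was
# based on, so poor decisions can be found and kept out of retraining.
@dataclass
class DecisionRecord:
    alert_id: str
    decision: str        # e.g. "false_positive" or "true_positive"
    model_version: str
    top_factors: dict    # feature -> contribution, from your explainer
    decided_at: str

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the decision to an audit log (JSON Lines, one record per line)."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    alert_id="ALRT-0001",
    decision="false_positive",
    model_version="poc-candidate",
    top_factors={"name_similarity": 0.41, "close_hour": 0.32},
    decided_at=datetime.now(timezone.utc).isoformat(),
))
```

An append-only log like this also gives you the paper trail regulators expect when they ask why a given alert was closed.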
When evaluating new vendors, once the AI engine has been initially trained for your proof of concept, you should be able to clearly understand its findings and make changes at that point (and thereafter). You will very likely be surprised by some of the ways decisions are currently being made within your organization.
For example, at Silent Eight, our technology investigates and solves name and transaction alerts for banks. This work is typically done by teams of analysts, who investigate these alerts, and close them as either a true positive (there is risk here) or false positive (there is no risk). True positive alerts require substantially more time to investigate and close than alerts deemed to be false positive.
Analysts typically have KPIs around the number of alerts they’re expected to investigate and close each week.
By late Friday, the analysts are doing everything they can to make sure they meet this quota. As a result, it’s not unusual during the AI training process that the AI learns that 4pm on Fridays is a great reason to close out pending alerts as false positives.
Obviously, this is a good example of AI learning the wrong behaviour and needing to be tweaked. It’s also a good example of mistaking correlation for causation, which is a topic worthy of its own examination on another day.
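As a rough illustration of how such a pattern might be surfaced, the sketch below groups closed alerts by day and hour and looks at the false-positive rate; a spike late on Friday afternoons would suggest the quota, not the underlying risk, is driving dispositions. The column names and the hard-coded rows are hypothetical stand-ins for a real case-management export.

```python
import pandas as pd

# Hypothetical alert log: in practice this would come from your case
# management system; a few rows are hard-coded here for illustration.
alerts = pd.DataFrame({
    "closed_at": pd.to_datetime([
        "2019-06-03 10:15", "2019-06-07 16:05", "2019-06-07 16:40",
        "2019-06-07 17:10", "2019-06-05 11:30", "2019-06-07 09:20",
    ]),
    "disposition": [
        "true_positive", "false_positive", "false_positive",
        "false_positive", "true_positive", "true_positive",
    ],
})

# Break the closure time into day-of-week and hour so we can see whether
# disposition rates drift as the weekly quota deadline approaches.
alerts["day"] = alerts["closed_at"].dt.day_name()
alerts["hour"] = alerts["closed_at"].dt.hour

# False-positive rate by day and hour. A spike late on Friday afternoons
# is a red flag that timing, not risk, is driving the decision.
fp_rate = (
    alerts.assign(is_fp=alerts["disposition"].eq("false_positive"))
    .groupby(["day", "hour"])["is_fp"]
    .mean()
)
print(fp_rate)
```

If timing turns out to predict the disposition this strongly, you want it flagged during the proof of concept, not discovered in production.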
Today, as regulations are introduced and amended, you’re continually updating your policies to reflect these changes. It’s no different with artificial intelligence. It’s imperative that your AI engine is correspondingly easy to tweak, and that when you tweak it, you don’t lose everything it has already learned.
Thoughtful, well-architected technology should be built in a manner that makes it easy to update or amend part of the AI engine without impacting the rest of its learnings. This is something you should both ask about and test in the proof-of-concept environment.
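As one possible illustration (not Silent Eight’s actual architecture), the sketch below splits the engine into independent components so a single part can be retrained on corrected data while the others keep their existing, already-validated learnings. The component names and toy data are invented for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative only: a decision engine split into independent components
# (e.g., one model per alert type), so one part can be retrained while
# the others are left exactly as they were.
class AlertEngine:
    def __init__(self):
        self.models = {}  # component name -> fitted classifier

    def train_component(self, name, X, y):
        """(Re)fit one component; every other component is untouched."""
        clf = GradientBoostingClassifier(random_state=0)
        clf.fit(X, y)
        self.models[name] = clf

# Toy data standing in for alert features and analyst dispositions.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)

engine = AlertEngine()
engine.train_component("name_screening", X, y)
engine.train_component("transaction_screening", X, y)

# After spotting the Friday-afternoon bias in the transaction component,
# refit just that component on corrected data (here, with the suspect
# timing column dropped); name screening keeps its existing model.
engine.train_component("transaction_screening", X[:, :4], y)
```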
Stay tuned next week for Part 3: Does it have more than one purpose? What is the roadmap?