We are on the cusp of an artificial intelligence regtech spending boom, but AI has to be combined with human expertise if we are to stay a step ahead of the money launderers, argues Guy Harrison.
Financial crime is a multi-trillion-dollar business for criminal organisations. One only needs to look at last year’s FinCEN Files leak to understand the scale of the issue - one that has been further exacerbated during the global pandemic by the surge in online payments and remote onboarding. Criminals have been incredibly resourceful and quick to adapt to the digital world, where illicit money can rapidly change hands and cross borders until there is no clear trail back to its source.
To handle the sheer volume of online transactions, and to keep pace with the rate of change, financial firms will need to accelerate their transformation programmes. A recent report by Juniper Research found that global spending on regulatory technology is set to quadruple between now and 2025, to a staggering $130 billion. Artificial Intelligence (AI) will undoubtedly be a key driver of that growth, as banks look to increase efficiency and reduce costs. But does it come with risks?
Driving the RegTech revolution
As more and more accounts are created digitally, new opportunities are emerging to automate the customer onboarding process. From screening against government lists, to verifying documents and monitoring transactions, AI has the potential to transform anti-money laundering (AML) efforts.
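By way of illustration, the sketch below shows, in deliberately simplified Python, how an automated check might screen a new applicant’s name against a watchlist during onboarding. The watchlist entries, the similarity function and the 0.85 threshold are all hypothetical; production screening engines match on far richer data, including aliases, dates of birth and identifiers.

```python
from difflib import SequenceMatcher

# Hypothetical, tiny watchlist; real lists (OFAC, UN, EU) contain
# thousands of entries with aliases and identifying details.
WATCHLIST = ["Ivan Petrov", "Acme Trading FZE", "Maria Gonzalez"]

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity between two names, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_applicant(name: str, threshold: float = 0.85) -> list:
    """Return potential watchlist matches for a new applicant.

    Anything at or above the threshold is routed to a human analyst
    for review rather than being accepted or rejected automatically.
    """
    hits = []
    for entry in WATCHLIST:
        score = name_similarity(name, entry)
        if score >= threshold:
            hits.append({"watchlist_entry": entry, "score": round(score, 2)})
    return hits

if __name__ == "__main__":
    # A slight spelling variation should still surface for review.
    print(screen_applicant("Ivan Petrof"))
```

The design point the example is meant to make is that automation narrows the haystack; the final call on a near-miss still sits with a person.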
It’s important to remember that risks are never static – a previously low-risk customer could suddenly be exposed as having links to organised crime, or added to a sanctions list.
Therefore, automated continuous monitoring is quickly becoming table stakes in the fight against financial crime, ensuring that any new development affecting a customer’s risk profile is flagged immediately - enabling firms to address issues as soon as they arise.
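A minimal sketch of that idea, again with entirely hypothetical data structures: each customer carries a last-known risk rating, and any change surfaced by the latest re-screening run is raised as an alert for an analyst to review.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    last_known_risk: str   # e.g. "low", "medium", "high"

def monitor(customers, fresh_assessments):
    """Flag customers whose risk rating has changed since the last check.

    fresh_assessments maps customer_id to the latest rating produced by
    re-screening against sanctions lists, PEP data and adverse media.
    """
    alerts = []
    for customer in customers:
        current = fresh_assessments.get(customer.customer_id, customer.last_known_risk)
        if current != customer.last_known_risk:
            alerts.append({
                "customer_id": customer.customer_id,
                "previous": customer.last_known_risk,
                "current": current,
            })
            customer.last_known_risk = current  # remember for the next cycle
    return alerts

if __name__ == "__main__":
    book = [Customer("C001", "low"), Customer("C002", "low")]
    # C002 has just appeared on a sanctions list in the latest screening run.
    print(monitor(book, {"C001": "low", "C002": "high"}))
```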
And it’s not only the banks that can benefit from investment in AI. Analysis of filing trends for Suspicious Activity Reports (SARs) from 2014 to 2019 shows that almost 29 million reports were submitted to regulators over that period, with steady growth each year. The volume of SARs highlights the scale at which AML decision makers, at both banks and regulators, must work. Both will need to embrace new AI-powered technologies to analyse such vast amounts of data and to identify patterns in those data sets that a human could not.
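One widely used family of techniques for that kind of pattern-finding is unsupervised anomaly detection - offered here purely as an illustration, not as a description of any particular bank’s or regulator’s system. The features, figures and the use of scikit-learn’s IsolationForest below are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: amount, hour of day, transactions
# in the last 24 hours. Real models use many more engineered features.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(80, 20, 5000),      # typical amounts
    rng.integers(8, 20, 5000),     # business hours
    rng.integers(1, 5, 5000),      # modest daily volume
])
suspicious = np.array([[9500, 3, 40]])  # large amount, 3am, high velocity

# Fit on historical activity; flag roughly the most unusual 1 per cent.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# -1 marks an outlier worth routing to an analyst; 1 is deemed normal.
print(model.predict(suspicious))    # likely [-1]
print(model.predict(normal[:3]))    # likely [1 1 1]
```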
A double-edged sword
While AI can alleviate the manual burden of processing large volumes of data and accelerate the vetting process, it can also create new risks for businesses.
Machines lack the ability to understand contextual or linguistic nuance, increasing the risk of false positives or missed matches - which is particularly problematic when these systems are designed to protect businesses against risk.
They are also not smart enough to spot fake or inaccurate news and information. AI systems are ultimately only as good as the data fed into them - and even small flaws can make decisions unreliable. Businesses therefore need to ensure their data comes from carefully curated, verified and licensed sources - and that’s where human expertise comes in. Machines cannot determine which sources to rely on or how to access them, but subject matter experts know where to find the most reliable information.
As AI starts to make more decisions on our behalf, being able to explain those decisions to a regulator is imperative.
Third-party AI solutions cannot be allowed to operate as a black box - particularly when a decision produces a legal effect. That’s why it is so crucial for a human to bring the wider context to those decisions. And compliance teams simply cannot afford to get it wrong: misinformation, or missed information, could result in a firm unwittingly doing business with a sanctioned individual or entity, or facilitating the flow of funds for organised crime.
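One practical way to avoid the black-box problem is to make every automated decision carry its own audit trail. The sketch below is a hypothetical example of what such a record might contain - the field names, reasons and model version are invented for illustration - so that a compliance officer can later explain to a regulator why the system acted as it did.

```python
import json
from datetime import datetime, timezone

def record_decision(customer_id, decision, reasons, model_version):
    """Write an auditable record of an automated compliance decision.

    The aim is that a human can reconstruct *why* the system acted,
    rather than pointing at an opaque score.
    """
    record = {
        "customer_id": customer_id,
        "decision": decision,
        "reasons": reasons,               # human-readable rationale
        "model_version": model_version,   # which model or rule set produced it
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(record_decision(
    "C002",
    "escalate_to_analyst",
    ["name similarity 0.91 with sanctions entry", "recent adverse media hit"],
    "screening-rules-v3.2",
))
```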
We can expect to see increased scrutiny of the role of AI - and businesses could face significant fines and severe reputational damage if their systems are found to be straying from documented policy or treating customers unfairly. So although the future will bring more digital transformation and automation, a responsible compliance programme will never be fully dependent on AI. Compliance professionals, and their ability to connect the dots and exercise judgment, will always play a critical role.
How humans and machines can work together
Combating money laundering is an enormous, incredibly complex task. With the number of new customers onboarded and transactions processed reaching into the hundreds of millions, AI is an essential element of any compliance programme.
Machines are able to scour a tremendous amount of data very quickly, working faster than humans ever could - while optimising continuously. However, human expertise is only becoming more important in the age of AI.
A model isn’t smart by itself - a machine is only as smart as you train it to be. Humans are also able to understand and evaluate the data better than any machine, spotting inconsistencies and catching things that AI may have missed.
We call this collaboration between humans and machines “Authentic Intelligence” - and we believe it is only this combination of human and artificial intelligence that has the power to outsmart the money launderers, and thereby start to tackle some of the world’s most heinous crimes, from terrorism financing to wildlife trafficking and modern slavery - all of which fuel the financial crime epidemic.
Guy Harrison, General Manager, Dow Jones Risk & Compliance