With the use of Artificial Intelligence in anti-financial crime and other areas predicted to boom, United States federal regulators are scratching their heads and wondering what it means for compliance. They have asked for your help through a public Request for Information. Here is what they want to know.
As GRC World Forums has reported in recent weeks, we are widely believed to be on the cusp of a dramatic boom in the use of Artificial Intelligence and machine learning in regulatory technology. A report by Juniper Research earlier this month predicts spending on regtech will surge from $33bn in 2020 to $130bn in 2025.
This increase will be fuelled by the use of technology to meet Know Your Customer requirements and to onboard new customers. AI and machine learning can also be used to flag unusual transactions, augment risk management controls, personalise customer services, feed into credit decisions, and improve cyber security by helping to detect malicious activity.
This tech revolution is also, unsurprisingly, attracting the interest of regulators. The European Union, if reports are to be believed, is considering a set of rules to restrict the use of AI, particularly for mass surveillance, ranking social behaviour, manipulating human behaviour or exploiting information about individuals or groups.
In the United States, no fewer than five federal regulators (see box below) have now come together to issue a public Request for Information (RfI) on the use of AI and machine learning by financial institutions.
The five US regulators asking for help:
Department of the Treasury - Office of the Comptroller of the Currency
Board of Governors of the Federal Reserve System
Federal Deposit Insurance Corporation
Bureau of Consumer Financial Protection
National Credit Union Administration
They are seeking information on several areas likely to be affected by financial institutions’ use of AI, including risk management practices, barriers when developing, adopting or managing AI, and potential benefits to institutions and customers.
The regulators say they intend to use the responses to help determine whether any clarification of the rules around AI is needed.
Here we outline some of the key areas the regulators are asking for comments on:
Explainability
Simply speaking, ‘explainability’ refers to how a system uses the data put into it to produce its outputs. It can describe how a model functions overall (‘global explainability’) or how it arrived at an outcome in a given situation (‘local explainability’).
Dr Janet Bastiman, Head of Analytics at Napier AI, last month told GRC World Forums that the requirement for explainability is a key regulatory challenge for the technology.
She said: “When you have a human analyst doing an investigation, they can very clearly say why one transaction is different from another or how the volume was different. Historically one of the challenges with AI, particularly when it is machine learning or deep learning, is that you don’t have that explainability; it gives you a probability ‘score’.”
The five regulators, in their Request for Information, say a lack of explainability can inhibit an institution’s understanding of the ‘conceptual soundness’ of the AI, which can increase uncertainty around its reliability and increase risk when it is used in new contexts. It can also make audit and review more challenging and pose challenges for compliance with consumer protection laws, such as data protection requirements.
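To make the idea concrete, the sketch below shows one simple, model-agnostic, post-hoc way of getting a global explanation: permutation importance, where each input feature is shuffled in turn and the drop in model accuracy indicates how much the model relies on it. The transaction features, data and model here are hypothetical stand-ins invented for illustration; they are not drawn from the RfI or from any real anti-money laundering system.

```python
# A minimal sketch (not from the RfI) of a post-hoc, global explainability check.
# The features, data and model are hypothetical stand-ins for a real AML model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical transaction features: amount, hour of day, country-risk score
X = rng.normal(size=(1000, 3))
# Synthetic "suspicious" label driven almost entirely by the first feature
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy degrades -- a crude, model-agnostic global explanation
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["amount", "hour", "country_risk"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Explaining an individual decision, such as why one specific transaction was flagged, typically needs further local techniques, which is part of what the regulators probe in Question 2 below.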
Question 1: How do financial institutions identify and manage risks relating to AI explainability? What barriers or challenges for explainability exist for developing, adopting, and managing AI?
Question 2: How do financial institutions use post-hoc methods to assist in evaluating conceptual soundness? How common are these methods? Are there limitations of these methods (whether to explain an AI approach’s overall operation or to explain a specific prediction or categorization)? If so, please provide details on such limitations.
Question 3: For which uses of AI is lack of explainability more of a challenge? Please describe those challenges in detail. How do financial institutions account for and manage the varied challenges and risks posed by different uses?
Risks from broader or more intensive data processing and usage
“Garbage in, garbage out” is a computer science phrase going back decades, and with the use of AI it has never been truer.
AI systems necessarily require large amounts of data, including data to ‘train’ the systems initially so they can identify patterns. Data quality is therefore crucial because, as the regulators’ RfI points out, “if the training data are biased or incomplete, AI may incorporate those shortcomings into its predictions or categorizations”.
Institutions will therefore have to take particular care to ensure data quality, especially if they are moving data out of existing systems to implement new technological solutions.
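As a simple illustration of what “automated data quality routines” (referenced in Question 4 below) can look like, the hypothetical sketch that follows runs a handful of basic checks, missing values, out-of-range amounts, unrecognised currency codes and duplicates, before records would be allowed into a training pipeline. The column names and rules are invented examples, not anything prescribed by the regulators.

```python
# A hypothetical sketch of automated data-quality checks run before records
# reach a training pipeline; the columns and rules are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "amount": [120.0, -5.0, None, 99_000.0],
    "currency": ["USD", "USD", "EUR", "usd"],
    "customer_id": ["A1", "A1", "B2", "C3"],
})

issues = {
    "missing_amount": int(df["amount"].isna().sum()),
    "negative_amount": int((df["amount"] < 0).sum()),
    "unrecognised_currency": int((~df["currency"].isin(["USD", "EUR", "GBP"])).sum()),
    "duplicate_rows": int(df.duplicated().sum()),
}
print(issues)  # records flagged here would be remediated before training
```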
Question 4: How do financial institutions using AI manage risks related to data quality and data processing? How, if at all, have control processes or automated data quality routines changed to address the data quality needs of AI? How does risk management for alternative data compare to that of traditional data? Are there any barriers or challenges that data quality and data processing pose for developing, adopting, and managing AI? If so, please provide details on those barriers or challenges.
Question 5: Are there specific uses of AI for which alternative data are particularly effective?
Overfitting
A problem closely related to data quality is “overfitting”, which occurs when a model learns a pattern that is prevalent in its training data but not in the real-world data as a whole, meaning that when the machine learning is applied in practice it can produce inaccurate predictions or findings.
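The standard way to detect this is to compare a model’s performance on its training data with its performance on data it has never seen. In the hypothetical sketch below the labels are pure noise, so the gap between the two scores is entirely the model memorising spurious patterns rather than learning anything real.

```python
# A hypothetical sketch of an overfitting check: the labels below are random
# noise, so any gap between training and held-out accuracy is pure memorisation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)          # labels carry no real signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))   # ~1.0: memorised
print("held-out accuracy:", model.score(X_test, y_test))     # ~0.5: chance level
```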
Overfitting has long been recognised as a challenge for AI adoption. In 2017, for example, the Institute of International Finance said in a report: “Banks have sometimes also experienced that machine learning can be hard to apply, as methods can be complex and models sensitive to overfitting the data. Thereby, the quality of data within banks is not always fit enough for advanced statistical analysis, while banks are not always able to consolidate the data from across the financial group, among others, due to inconsistent data definitions across jurisdictions and the use of multiple systems”.
The federal regulators are therefore keen to understand to what extent banks are on top of this risk.
Question 6: How do financial institutions manage AI risks relating to overfitting? What barriers or challenges, if any, does overfitting pose for developing, adopting, and managing AI? How do financial institutions develop their AI so that it will adapt to new and potentially different populations (outside of the test and training data)?
Cybersecurity risk
No technology is immune from cybersecurity threats, and AI, machine learning and deep learning are no exception.
One attack method involves cyber-attackers “poisoning” the data that goes into the training stage so that the system does not produce useful results.
In a 2019 Harvard Belfer Center paper on attacking AI, Marcus Comiter wrote: “This represents a new challenge: even if data is collected with uncompromised equipment and stored securely, what is represented in the data itself may have been manipulated by an adversary in order to poison downstream AI systems. This is the classic misinformation campaign updated for the AI age.”
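The sketch below illustrates the general idea Comiter describes with a deliberately simple, hypothetical example of targeted label flipping: an attacker who can corrupt part of the training labels can teach a model to wave through exactly the cases they care about. It is an illustration of the technique in miniature, not a depiction of any real AML pipeline.

```python
# A hypothetical sketch of targeted label-flipping "poisoning": corrupting part
# of the training labels teaches the model to miss the cases an attacker wants
# missed. Data and model are illustrative, not a real AML pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 3))
y = (X[:, 0] > 1.0).astype(int)              # "suspicious" when feature 0 is high

X_train, y_train = X[:3000], y[:3000].copy()
X_test, y_test = X[3000:], y[3000:]

clean = LogisticRegression().fit(X_train, y_train)

# Poison the training set: relabel 80% of the suspicious training examples as benign
suspicious_train = np.where(y_train == 1)[0]
flip = rng.choice(suspicious_train, size=int(0.8 * len(suspicious_train)), replace=False)
y_train[flip] = 0
poisoned = LogisticRegression().fit(X_train, y_train)

# Share of genuinely suspicious test cases each model still catches
suspicious_test = y_test == 1
print("clean model catches:   ", clean.predict(X_test[suspicious_test]).mean())
print("poisoned model catches:", poisoned.predict(X_test[suspicious_test]).mean())
```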
Question 7: Have financial institutions identified particular cybersecurity risks or experienced such incidents with respect to AI? If so, what practices are financial institutions using to manage cybersecurity risks related to AI? Please describe any barriers or challenges to the use of AI associated with cybersecurity risks. Are there specific information security or cybersecurity controls that can be applied to AI?
Dynamic updating
One of the key benefits of some forms of AI and machine learning is the ability for the model to learn and evolve over time.
Dynamically updating models can, in theory, deliver significant cost savings and improve accuracy.
“Among the most effective weapons available are advanced risk-rating models. These more accurately flag suspicious actors and activities, applying machine learning and statistical analysis to better-quality data and dynamic profiles of customers and their behaviour,” said a report by McKinsey in 2019. “Such models can dramatically reduce false positives and enable the concentration of resources where they will have the greatest AML effect.”
However, regulators are also keen to understand what this means for the tracking of results over time. If an AI approach was independently reviewed at the outset, how does an institution know the results of that review are still valid once the system has evolved? How does it ensure performance thresholds are still relevant?
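One common pattern for answering those questions is to hold back a frozen reference set of inputs, re-score it after every update, and alert when the outputs drift beyond a tolerance agreed at the last independent review. The sketch below is a hypothetical illustration of that pattern only; the model, data and tolerance are all invented.

```python
# A hypothetical sketch of monitoring a dynamically updating model: re-score a
# frozen reference set after each update and alert if the flag rate drifts
# beyond a tolerance set at the last independent review. All values are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
reference_X = rng.normal(size=(200, 4))        # frozen benchmark inputs
tolerance = 0.05                               # maximum allowed shift in flag rate

model = SGDClassifier(random_state=0)
baseline_rate = None

for week in range(5):
    # A new batch of synthetic transactions arrives and the model updates online
    X_batch = rng.normal(size=(500, 4)) + 0.1 * week   # slowly drifting data
    y_batch = (X_batch[:, 0] > 0.5).astype(int)
    model.partial_fit(X_batch, y_batch, classes=[0, 1])

    flag_rate = model.predict(reference_X).mean()
    if baseline_rate is None:
        baseline_rate = flag_rate                       # set at the initial review
    drifted = abs(flag_rate - baseline_rate) > tolerance
    print(f"week {week}: reference flag rate {flag_rate:.2f}, drifted: {drifted}")
```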
Question 8: How do financial institutions manage AI risks relating to dynamic updating? Describe any barriers or challenges that may impede the use of AI that involve dynamic updating. How do financial institutions gain an understanding of whether AI approaches producing different outputs over time based on the same inputs are operating as intended?
AI use by community institutions
The regulators also recognise that financial institutions come in different shapes and sizes and ask whether a community institution, such as a credit union, is more likely to face different challenges. Perhaps such institutions are more likely to rely on third parties, raising questions about the level of in-house expertise needed.
Question 9: Do community institutions face particular challenges in developing, adopting, and using AI? If so, please provide detail about such challenges. What practices are employed to address those impediments or challenges?
Oversight of third parties
There is already extensive guidance from the US authorities on the use of third parties in banking generally, covering risk management, oversight, contract negotiation, planning, independent reviews and more. However, the regulators want to hear more about how this relates to AI specifically.
Question 10: Please describe any particular challenges or impediments financial institutions face in using AI developed or provided by third parties and a description of how financial institutions manage the associated risks. Please provide detail on any challenges or impediments. How do those challenges or impediments vary by financial institution size and complexity?
Fair lending
While not directly related to financial crime, fair lending is an area in which the regulators have a host of questions about whether AI could present challenges to compliance with laws and regulations aimed at consumer protection.
Question 11: What techniques are available to facilitate or evaluate the compliance of AI-based credit determination approaches with fair lending laws or mitigate risks of non-compliance? Please explain these techniques and their objectives, limitations of those techniques, and how those techniques relate to fair lending legal requirements.
Question 12: What are the risks that AI can be biased and/or result in discrimination on prohibited bases? Are there effective ways to reduce risk of discrimination, whether during development, validation, revision, and/or use? What are some of the barriers to or limitations of those methods?
Question 13: To what extent do model risk management principles and practices aid or inhibit evaluations of AI-based credit determination approaches for compliance with fair lending laws?
Question 14: As part of their compliance management systems, financial institutions may conduct fair lending risk assessments by using models designed to evaluate fair lending risks (“fair lending risk assessment models”). What challenges, if any, do financial institutions face when applying internal model risk management principles and practices to the development, validation, or use of fair lending risk assessment models based on AI?
Question 15: The Equal Credit Opportunity Act (ECOA), which is implemented by Regulation B, requires creditors to notify an applicant of the principal reasons for taking adverse action for credit or to provide an applicant a disclosure of the right to request those reasons. What approaches can be used to identify the reasons for taking adverse action on a credit application, when AI is employed? Does Regulation B provide sufficient clarity for the statement of reasons for adverse action when AI is used? If not, please describe in detail any opportunities for clarity.
Other questions
Question 16: To the extent not already discussed, please identify any additional uses of AI by financial institutions and any risk management challenges or other factors that may impede adoption and use of AI.
Question 17: To the extent not already discussed, please identify any benefits or risks to financial institutions’ customers or prospective customers from the use of AI by those financial institutions. Please provide any suggestions on how to maximize benefits or address any identified risks.