Today, a growing number of financial institutions (FIs) are leveraging machine learning (ML) and artificial intelligence (AI) across all areas of banking — from the branch to the back office. While larger banks and credit unions may build their own solutions, many find it easier and less costly to partner with an outside fintech that provides the necessary technology.
Like all banking systems, the models that power these technologies must be validated, monitored and tested for accuracy and risk. Models can be susceptible to bias, whether by unfairly ignoring or targeting applicants based on characteristics such as zip code, title, gender or ethnic background. Should this occur, FIs could find themselves liable to both the public and regulators. Though the sophistication of these systems can make evaluation difficult, most institutions are quickly realizing the importance of model validation, especially as regulatory scrutiny of these platforms has grown. And while many fintechs creating these systems may be content to remain behind the scenes, the issue is just as important for them (and their future FI relationships).
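To make this concrete: one common starting point for bias testing is simply comparing a model's outcome rates across groups. The sketch below is a minimal, hypothetical Python illustration of the widely cited "four-fifths" rule of thumb for disparate impact; the group names, sample data and 0.8 threshold are illustrative assumptions, not drawn from any specific regulation, vendor tool or validation standard.

```python
# Illustrative sketch: flag groups whose approval rate falls below
# 80% of the highest group's rate (the "four-fifths" rule of thumb).
# Group labels and data are hypothetical.

def approval_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 approval outcomes."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def disparate_impact_flags(decisions, threshold=0.8):
    """Return True for each group whose approval rate is less than
    `threshold` times the best-performing group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% approved
}
flags = disparate_impact_flags(decisions)
# group_b's rate ratio is 0.375 / 0.75 = 0.5, below 0.8, so it is flagged
```

A real validation program would go far beyond this (statistical significance testing, proxy variables such as zip code, ongoing monitoring), but even a simple rate comparison like this is the kind of check that makes a model's behavior auditable rather than a black box.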
In reviewing these technologies, institutions are applying the "tried and true" rule of risk management: "trust, but verify." They are also realizing that it is not enough to assume that, because a vendor covers their technology gap, any issues will be viewed as that vendor's fault alone. If a problem occurs, it is the institution that is held accountable. Most FIs are keenly aware that their customers, regulators and shareholders do not care whether a third party made the mistake – the relationship is with the FI, and the blame resides with the FI.
In turn, institutions are putting more pressure on AI/ML vendors to ensure that their systems are both operating properly and in compliance. Regulators are looking, too. They are becoming increasingly aware of these platforms and the issues that can arise, and we are beginning to see a push for more oversight and guidance (like the FDIC's recently issued request for information (RFI) on "Standard Setting and Voluntary Certification for Models and Third-Party Providers of Technology and Other Services").
A risk many fintechs underestimate is that regulators can come to them directly if they have concerns about their work with financial institutions. And if a suspected issue arises while the fintech is rapidly adding new FI clients, regulators take even more notice and tend to act that much faster, which can include auditing the fintech itself – something that often triggers audit reports and expected remediation. Fintechs that cannot reach satisfactory compliance by the time regulators return risk having the unresolved issues reported directly to their customers. In a risk-averse industry, this can result in an exodus of FI clients (and in some cases, regulators can even order FIs to terminate relationships with certain vendors).
Banks and credit unions operate under set regulatory standards, and the fintechs that work with them must understand that they are held to those same standards. From a regulatory standpoint, guidance for model validation and evaluation will only become more standardized and formalized, so both FIs and fintechs need to be thinking about it now, or they will have a much bigger gap to close later.
So where to start? As with cybersecurity, third-party testing and validation of these models and platforms is the best way to ensure their integrity and proper function, all of which requires specialized analysis. The good news is that it shares similarities with other forms of vendor risk management that may already be in place. For example, fintechs working with FIs are already expected to have a System and Organization Controls (SOC) report, a Business Continuity Management (BCM) program, an information security program, cyber resilience and so on. If a solution uses models to make decisions on behalf of an FI (and its customers), model validation will need to be added to the due diligence package. And if it is tasked with making decisions, it cannot exist within a "black box." This is critical given several recent stories highlighting the issues that can arise when models act unexpectedly or in error.

Bottom line: model validation is important, and FIs employing AI or machine learning solutions should already be conducting it, and doing so often. In the increasingly automated world of digital banking, it is quickly becoming part of the cost of doing business as these technologies grow more prevalent. With regulations constantly evolving, fintechs must stay on top of this guidance to ensure that they build it into their models, along with ways to test and demonstrate that those models are functioning properly. As we move forward, AI/ML will only become more deeply entrenched in every facet of banking, representing an important new due diligence component for the industry.
Wipfli LLP is a leading national accounting and consulting firm serving clients across a diverse spectrum of industries, including financial institutions, services and technologies.