What is the issue?
As the use of artificial intelligence (AI) within financial institutions becomes more widespread, new challenges arise in understanding the output of AI analysis. A recently published Risk.net article discusses the challenges of interpreting these new types of models.
What does the article say?
The article "No silver bullet for AI explainability" points to the increased use of neural networks to automate tasks in areas such as options hedging and credit card lending within the banking sector. Such models include interactions between layers which cannot often be traced, therefore making explainability a real issue. Similar techniques are increasingly being used within the insurance industry; for example, insurers are making use of machine learning in the fields of pricing and underwriting, and in order to obtain in-depth insights from their data sets.
The ability to explain AI output is important for a number of reasons: ensuring that models are reliable, transparent and understood; ensuring that any biases can be identified; and ensuring that regulators are confident in the business's capability to understand the approach.
A paper by Caenazzo and Ponomarev, "Interpretability of neural networks: a credit card default model example," finds that popular approaches to explaining neural networks each have their own strengths and offer different insights. No single technique is superior, and a combination of techniques may yield the most insight.
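To make the idea of combining techniques concrete, the sketch below (not taken from the paper, and not its actual experiment) applies two commonly used, model-agnostic interpretation techniques, permutation feature importance and partial dependence, to a small neural network trained on synthetic data. The feature names, model settings and data are illustrative assumptions only.

```python
# Illustrative sketch: combining two interpretation techniques on a small
# neural-network classifier trained on synthetic "credit default" data.
# All feature names and parameters are hypothetical, for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance, partial_dependence

feature_names = ["utilisation", "payment_delay", "credit_limit", "age", "income"]

# Synthetic stand-in for a credit card default data set
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward neural network (multi-layer perceptron) classifier
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Technique 1: permutation importance -- a global ranking of which inputs
# most affect out-of-sample accuracy when their values are shuffled
perm = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, perm.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>15}: {score:.3f}")

# Technique 2: partial dependence -- how the predicted default probability moves
# as one feature varies, averaged over the rest of the data
pd_result = partial_dependence(model, X_test, features=[0], kind="average")
print("Partial dependence of prediction on", feature_names[0])
print(np.round(pd_result["average"], 3))
```

Each technique answers a different question: the permutation ranking indicates which inputs the model relies on overall, while the partial dependence profile shows the shape of the relationship for a single input. Read together they give a fuller picture than either alone, which is the point the paper makes about combining approaches.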
Why is this relevant to the risk team?
Risk teams need to develop new expertise to ensure that the use of such models does not introduce excessive risk into the business model. Risk teams are likely to be asked to give risk opinions on whether an AI approach should be adopted, and to validate on an ongoing basis whether the risk posed by such tools remains within risk appetite. Staying abreast of the interpretation techniques used by the business, and more widely by researchers, will help the risk team obtain feedback on the extent to which model results are robust, traceable and defensible. In turn, this should help avoid undue risks of reputational damage, errors, mispricing or inappropriate business decisions.
Even for firms not currently employing AI techniques, the issues described here could rapidly become relevant once adopting AI-based tools becomes a business and strategic priority. It is therefore key that risk teams start considering now how to identify, manage and mitigate AI-related risks, particularly given that assessing these risks typically requires a different skill set from the knowledge base traditionally found within risk teams.
Relevant links
Emerging risks and opportunities in insurance: Technology and innovation
Emerging risks in insurance: Job automation