On 29 May 2024, the Danish Financial Supervisory Authority (the Danish FSA) published its good practice guidance on the use of artificial intelligence in the financial sector.
To support financial institutions in developing and using artificial intelligence in a safe and sound manner, the Danish FSA has published a set of recommendations on what financial institutions should consider when applying AI technology. The purpose of the most recent guidance is not to serve as a basis for regulatory control, but to highlight the areas of concern that should be taken into consideration when designing and implementing AI technology in the automated procedures of financial institutions. It should be read in conjunction with the recommendations on the use of machine learning in the financial sector published by the Danish FSA in 2019, and the memorandum of the Danish National Bank from 2022 on the five areas of concern in relation to AI and machine learning in the financial sector. These are by design narrower in scope than the general EU AI Act of 2024, which seeks to establish a common legal framework for the use of AI in the EU by taking a risk-based approach.
Financial institutions have widely embraced the use of artificial intelligence and are testing the technology's potential applications in a wide range of areas. The Danish FSA maintains that the use of AI also entails a number of inherent risks that companies should be aware of and need to address in the course of developing and implementing AI technology in the financial sector. While individual use cases and risk profiles will vary widely, the overarching principles should act as guideposts for the parties involved.
"Financial organisations should of course explore the possibilities of using AI in their business, and we want to help companies do this in the best possible manner to avoid unnecessary risks. That's why we are now providing a guidance and recommendations on how AI technology can be used effectively and safely for both companies and citizens," states Rikke-Louise Ørum Petersen, Deputy Director of the Danish Financial Supervisory Authority.
The Danish FSA states that financial institutions should consider how AI models are trained and retrained and what balance they want between high performance and explainability, i.e. being able to explain how the model has arrived at its results. Companies must also be able to identify and address any new risks that may arise when using AI.
The new guidance from the Danish FSA discusses good practices for using artificial intelligence in the financial sector, focusing specifically on governance, model management, and explainability. It emphasises the need for financial institutions to implement AI safely and to manage the associated risks. While financial institutions are closely regulated by, inter alia, the Danish FSA, the guidance is equally important for non-regulated developers and suppliers of IT solutions designed for the financial sector. Moreover, the overarching principles will require that both risk management and compliance professionals form an integral part of the development process.
In terms of governance, companies should have proper systems in place to create an overview of their use of AI and consider how to manage specific risks associated with AI-based models. They should also establish an approach to risk analysis and classification of models, regularly review their risk identification and mitigation approach, and assign responsibility for specific models to employees or units.
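To make the governance point concrete, the following is a minimal, purely illustrative Python sketch of what an entry in an internal inventory of AI-based models could look like, with a risk classification, an assigned owner, and a check for overdue risk reviews. The field names, risk tiers, and one-year review cycle are our own assumptions, not requirements set by the Danish FSA.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskClass(Enum):
    # Illustrative tiers only; the guidance does not prescribe specific categories.
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class ModelRecord:
    """One entry in a hypothetical internal inventory of AI-based models."""
    model_id: str
    purpose: str            # business use case, e.g. credit scoring
    risk_class: RiskClass   # outcome of the institution's own risk classification
    owner: str              # employee or unit assigned responsibility for the model
    last_risk_review: date  # supports regular review of risk identification and mitigation


inventory = [
    ModelRecord(
        model_id="credit-scoring-v3",
        purpose="Consumer loan application scoring",
        risk_class=RiskClass.HIGH,
        owner="Credit Risk Modelling Unit",
        last_risk_review=date(2024, 5, 1),
    ),
]

# Flag models whose risk assessment has not been revisited within an assumed one-year cycle.
overdue = [m.model_id for m in inventory if (date.today() - m.last_risk_review).days > 365]
print(overdue)
```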
For model management, financial institutions should assess how AI-based models differ from other models currently in use and develop a plan for retraining and regular validation. They should also ensure that they have sufficient internal resources available, including for model validation, and have a policy in place for managing and securing different versions of their models. The Danish FSA also considers effective data management, including storage and access, to be key to best practice and the safe use of AI.
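As a purely illustrative sketch of model version management, the Python snippet below records each new version of a model together with a hash of its training data and its validation results, so that versions can be secured, reproduced, and revalidated. The registry structure, field names, and metrics are our assumptions and not part of the Danish FSA guidance.

```python
import hashlib
import json
from datetime import datetime, timezone


def register_model_version(registry: dict, name: str, params: dict,
                           training_data: bytes, validation_metrics: dict) -> str:
    """Record a new model version with enough metadata to reproduce and validate it later."""
    data_hash = hashlib.sha256(training_data).hexdigest()  # ties the version to its training data
    version = f"{name}-v{len(registry.get(name, [])) + 1}"
    registry.setdefault(name, []).append({
        "version": version,
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "hyperparameters": params,
        "training_data_sha256": data_hash,
        "validation": validation_metrics,  # e.g. results of an independent validation run
    })
    return version


registry: dict = {}
register_model_version(
    registry,
    name="claims-triage",
    params={"max_depth": 4},
    training_data=b"placeholder for the training data set",
    validation_metrics={"auc": 0.83, "validated_by": "Model Validation Unit"},
)
print(json.dumps(registry, indent=2))
```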
Financial institutions need to balance performance and explainability when using AI. They should consider the trade-off between the two and involve a cross-functional team in making decisions. In the view of the Danish FSA, it is important to understand a model's results and any bias, and to assess the relevant usage scenarios. Mitigation choices and considerations must be documented, and financial institutions should choose AI models that are inherently easier to explain when possible.
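As a minimal illustration of the explainability side of that trade-off, the sketch below fits an inherently interpretable model (a logistic regression) on synthetic data and reads off how each feature pushes the decision. It assumes scikit-learn is available, and the feature names are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for loan applications: two features, e.g. income and debt ratio.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# A logistic regression is inherently easier to explain than many complex models:
# each coefficient states how a feature pushes the decision towards approval or rejection.
model = LogisticRegression().fit(X, y)
for feature, weight in zip(["income", "debt_ratio"], model.coef_[0]):
    print(f"{feature}: weight {weight:+.2f}")
```

A more complex model might score marginally better, but its individual decisions would be harder to account for; that is the trade-off the Danish FSA asks institutions to consider and document.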
The Danish FSA maintains that explainability is important when using AI for several reasons:
Transparency and Accountability: "Computer says no" will not suffice. AI systems are increasingly being used in critical decision-making processes, such as loan approvals, insurance claims, and even hiring. When AI algorithms make decisions that impact individuals or society as a whole, it is important to be able to explain how and why those decisions were made. Explainability helps ensure transparency and accountability, allowing stakeholders to understand the reasoning behind AI-driven outcomes. If, for example, an insurance company uses AI to review claims and either approve or deny coverage, the benefit of automated processing must be weighed against the need to explain the outcome in a reasoned manner.
Trust and Acceptance: Lack of understanding and trust in AI systems can lead to scepticism and resistance. Explainability helps build trust by providing insights into how AI models work and how they arrive at their conclusions. When users and stakeholders can comprehend the decision-making process, they are more likely to accept and trust the AI system's outcomes.
Detecting and Addressing Bias: AI models may inadvertently perpetuate biases present in the data they are trained on. Explainability allows for the identification and mitigation of bias by enabling stakeholders to examine the underlying factors and variables that influence the model's decisions. It helps in detecting discriminatory patterns and taking corrective actions to ensure fairness and equity; a minimal illustration of such a check follows below.
Compliance and Regulation: In several industries, there are emerging legal and regulatory requirements for explainability in AI systems. For example, in sectors such as finance and healthcare, some regulations may mandate that decisions made by AI models must be explainable to ensure compliance with laws and regulations. Explainability helps financial institutions to demonstrate compliance and to avoid legal and ethical issues.
Error Detection and Debugging: Explainability aids in error detection and debugging of AI models. When an AI system produces unexpected or incorrect results, explainability allows stakeholders to trace back the decision-making process and identify potential flaws or errors. This helps in improving the model's performance and reliability.
Human Oversight and Intervention: In certain critical applications, human oversight and intervention are necessary. Explainability enables humans to understand the AI system's reasoning and intervene when necessary. It allows human experts to validate the model's decisions, provide additional context, or correct errors, ensuring that AI is used as a tool to augment human decision-making rather than replace it entirely.
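Returning to the point on detecting bias above, the following minimal sketch computes a simple approval-rate gap between two groups on invented data. The metric, the group labels, and the threshold at which a gap becomes a concern are all assumptions for the institution itself to define.

```python
import numpy as np

# Toy model decisions and a protected attribute for eight applicants (invented data).
approved = np.array([1, 0, 1, 1, 0, 0, 1, 0])               # 1 = approved, 0 = denied
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])   # e.g. two demographic groups

# Approval rate per group and the gap between them (a crude demographic-parity check).
rates = {g: float(approved[group == g].mean()) for g in np.unique(group)}
gap = max(rates.values()) - min(rates.values())
print(rates, f"approval-rate gap: {gap:.2f}")

# A large gap is not proof of discrimination, but it is a signal that the factors
# driving the model's decisions should be examined and, if needed, corrected.
```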
Explainability is not only key for ensuring transparency, trust, and fairness, but also for compliance, error detection, and, most importantly, human oversight in AI systems. It helps bridge the gap between the complexity of AI algorithms and the need for human understanding and control, making the use of AI technology by financial institutions both accountable and, by implication, beneficial for individuals and society as a whole. While automation is an important feature of financial systems, ensuring that the outcomes remain correct will, and should, always be the responsibility of the parties using AI in their operations.
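To illustrate the human-oversight point in concrete terms, the sketch below automates only clear-cut cases and refers uncertain ones to a human reviewer. The thresholds and labels are invented for the example; where to draw those lines is a policy decision for each institution.

```python
def route_decision(model_score: float, low: float = 0.3, high: float = 0.9) -> str:
    """Route a case based on model confidence; the thresholds are illustrative only."""
    if model_score >= high:
        return "auto-approve"
    if model_score <= low:
        return "auto-reject"  # some institutions may prefer to escalate rejections as well
    return "refer to human reviewer"


# An uncertain score falls between the thresholds and is escalated for human review.
print(route_decision(model_score=0.55))  # -> refer to human reviewer
```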
Overall, the AI guidance of the Danish FSA provides important direction for financial institutions on how to safely and effectively design, implement, and use AI, and how to manage the associated risks in their deployment of AI tools. The guidance is especially important in an emerging field such as AI in the context of the financial sector, and given the rapid pace of development, a principles-based approach will provide better long-term guidance going forward. One thing is certain: given that rapid development, deploying AI on a principled basis will help ensure greater compliance with the main principles underlying financial regulation.
By addressing the guiding principles and the highlighted risks in the early development phase, developers and users of AI-based financial solutions can avoid development costs being wasted on work that fails to adhere to those principles later in the project. The highlighted principles must be viewed as overarching principles, and the Danish FSA recognises that no two situations or use cases will be alike, which is why the regulator has taken a principles-based approach to this emerging field, with a long-term view to guiding its development in a safe and sound manner.
Below, we have attached a machine-translated English version of the original Danish FSA AI guidance.
English Version
Danish Version