
Despite massive ongoing investment in AI, financial sector regulators' ability to oversee the deployment of AI tools that pose a potential threat to financial stability is lagging behind, says the Financial Stability Board (FSB).

In a new report, the FSB examines the state of AI oversight in the financial sector and sets out recommendations for policymakers to enhance monitoring and tackle the challenges regulators face.

“While financial authorities have made progress in understanding AI use cases and their benefits and vulnerabilities, monitoring efforts are still at an early stage,” the report said. 

In a previous report, the FSB highlighted an array of risks posed by AI adoption that could have implications for financial stability, including exacerbated market correlations, heightened cyber risks, and challenges in model risk and governance.

Its new report also highlights the risk posed by financial firms’ reliance on a small group of third-party service providers, which it warned could “expose financial institutions to operational vulnerabilities,” adding that “the growing use of generative AI could lead to critical third-party dependencies.”

In particular, generative AI (AI designed to produce new content, such as text or images) is highly dependent on a small number of essential suppliers for specialized hardware, cloud infrastructure and pre-trained models, it noted.

“This heavy reliance can create vulnerabilities if there are few alternatives available,” it said. 

Regulators’ efforts to monitor these risks also face an array of obstacles, it noted, including a lack of data, limited transparency and the rapidly evolving nature of AI systems.

“Respondents to [a] member survey highlighted challenges such as a lack of agreed definitions for AI, difficulties in ensuring comparability across jurisdictions, challenges in assessing the criticality of AI services, as well as the cost and scope of monitoring,” it said.

Against that backdrop, the FSB called on regulators to “enhance their monitoring approaches” and set out a range of possible indicators, both direct and proxy, that regulators could use to monitor AI adoption and possible risks to the financial system.

“These can be collected through surveys, outreach, supervisory engagement with regulated entities, leveraging publicly available and vendor data, and existing supervisory frameworks,” it said.

“Monitoring efforts could benefit from exploring cost-effective approaches that are representative, mapped to identified vulnerabilities, timely and aligned, where possible, with relevant standards,” it said.