Existing regulations around the use of artificial intelligence (AI) in the financial sector may not be enough to prevent the systemic risks it poses, warns a new report from the Organisation for Economic Co-operation and Development (OECD).
The Paris-based group said that governments, regulators and financial firms need to step up their efforts to address the challenges posed by the growing use of AI.
“As AI applications become increasingly integrated into business and finance, the use of trustworthy AI will become increasingly important for ensuring trustworthy financial markets,” the report said.
The technology promises a number of potential benefits — including increased efficiency, improved client experiences and greater financial inclusion — but the report said that AI growth also “raises unique challenges to privacy, autonomy, transparency and accountability, which are particularly complex in the financial sector.”
Moreover, increasingly complex and opaque AI algorithms “could amplify existing risks in financial markets or give rise to new risks,” the OECD warned.
In the absence of adequate transparency, governance and accountability for AI systems, their use could introduce biases, generate herding behaviour or intensify market concentration, which could undermine market integrity and stability, the OECD said.
Prevailing regulatory frameworks “may fall short of addressing systemic risks presented by wide-scale adoption of AI-based fintech by financial firms,” the report added.
Regulators must consider either adapting their existing rules or creating new ones to keep pace with technological advances in AI. As the report said, it’s key to strike “the right balance between managing risks and supporting innovation.”