Limited resources for AI adoption, low barriers to entry for criminals and challenges in Canada’s financial regulatory framework all magnify AI risks for this country’s financial sector, according to a recent workshop on financial crimes and AI sponsored by the Global Risk Institute and the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC).
The 60 Canadian and international experts in financial crime and AI who participated in the workshop often perceive fraudsters as being “10 steps ahead,” while financial institutions remain largely reactive, said a report released in early November.
Participants saw AI use by criminals as most likely to increase fraud (89%), followed by money laundering (9%) and insider threats (2%). While they said AI investment, appropriately applied, can improve the effectiveness of detecting and fighting financial crime, they also pointed to obstacles to adoption, with nearly half (48%) citing cost and resource constraints as the biggest obstacle.
Limited resources
While organizations are currently focused on using AI to cut costs, participants emphasized that deterring financial crime requires prioritizing effectiveness first, with AI investments that enhance efficiency coming second.
Another key takeaway was that firms must be purposeful in their adoption of AI to deter financial crime. Each AI tool should be adopted based on a clear business case with measurable outcomes. “The wrong question is ‘How do I adopt AI?’ The right question is ‘How do we use AI to help our business?’” the report said.
Limited access to AI expertise may also hinder the deployment of appropriate technologies and restrict institutions’ ability to evaluate AI adoption. In addition, AI’s replacement of entry-level employees narrows the skill development pipeline for future talent experienced in fighting financial crime, which has historically benefited from an apprenticeship model.
Low barriers to entry for criminals
While organizations grapple with how to adopt AI, the technology’s accessibility has lowered entry costs for criminals. Even those with limited skills can now become a cyber threat. This means more frequent attacks, improved ability to conceal information and a broader reach, such as by reducing the language barrier, which was previously a significant obstacle to international financial crime. Because information sharing across borders is limited, foreign criminals whose abilities are enhanced by AI will also face lower detection and prosecution rates.
AI-enhanced cyber threats often use social media manipulation and enable criminals to deliver personalized fraud attacks on individuals, making these events increasingly malicious and harder to detect. While it is cheap to carry out an AI-enabled attack, sanctions evasion or money laundering, detection requires disproportionately greater spending by financial institutions.
Regulatory challenges
Canada’s federal structure and constitutional division of powers mean that no single jurisdiction has the authority to follow through from detection and reporting to prosecution and conviction. This decentralization makes coordination and information sharing more difficult, participants noted.
While criminals quickly adopt AI, Canada’s financial crime regulatory regime has also evolved, but experts were concerned about the industry’s ability to keep up with both threats and regulation. Some participants suggested regulatory sandboxes would help institutions test AI tools without fear of penalties.