Capital market regulators need to be specific about which artificial intelligence (AI) use cases require a human decision-maker and ensure companies have robust governance frameworks, a panel of experts said at the Alberta Securities Commission’s annual event in Calgary last week.
In the financial industry, AI is useful for research, predictive analytics, drafting documents and automating repetitive work, among other tasks, said Chad Langager, CEO of Edmonton-based investment research firm AlphaLayer. For example, AI can predict credit rating changes for fixed-income investments.
However, AI shouldn’t be used to displace human decision-making, Langager said. “When you pair a human with [AI] models, you tend to get very significant outperformance [over] either one of those two things in isolation.”
Human involvement in AI processes exists on a spectrum depending on the performance of the model and the cost of an error, Langager added. While deploying large sums of capital will require human involvement, credit scoring only needs supervision to avoid issues like bias, and high-frequency trading can be automated completely.
Although not every AI application requires human input, Canada’s Artificial Intelligence and Data Act (AIDA) currently recommends human oversight for every situation, said Martin Petrin, an associate professor of law at Western University. And guidance that’s too restrictive can stifle innovation.
“We don’t need to understand something fully in order to regulate it effectively,” Petrin said. Regulators could rely on disclosures, risk management policies and third-party certifications, for example.
The law is fundamentally a due diligence framework, said Carole Piovesan, lawyer and managing partner at INQ Law in Toronto. AIDA serves as guidance, telling companies what assessments they must perform to demonstrate due diligence.
Many firms already have data governance and risk management guidelines that can be adapted to AI use, Piovesan added. Firms just need to bridge the gap between their existing policies and their intended AI uses.
Board members also need to understand AI sufficiently to challenge decisions made by management and ask the right questions, Petrin said.
“If you have this governance framework, you give the teams in the organization the ability to start pursuing [AI],” Langager said. “Governance can remove the barriers to adopting AI.”
In October, the Ontario Securities Commission published a report with Ernst & Young LLP on the state of AI usage in capital markets. The report described the technology as being at an “intermediate” stage.