
With artificial intelligence (AI) evolving rapidly, concern is growing about the ability of regulators to keep up.

The Ontario Securities Commission’s (OSC) new strategic plan, which sets out the commission’s vision for the next six years, highlights AI as a key driver for the markets in the years ahead.

“Artificial intelligence continues to develop, with the potential to impact processes and stakeholders throughout our capital markets. It also raises important questions about managing risk, governance, and the potential for malicious use,” the plan said.

So far, little direct regulatory action has focused on AI in the capital markets — but that’s beginning to change.

Earlier this year, the U.S. Securities and Exchange Commission (SEC) launched enforcement actions targeting “AI-washing” — alleging that a pair of investment advisory firms misled investors by overhyping the use of AI in their products and services.

The SEC foresees AI-washing risks at both investment industry firms and issuers, said Gurbir Grewal, director of the SEC’s enforcement division, at the OSC’s annual conference in May.

While the penalties the SEC imposed in the AI-washing cases weren’t large, Grewal said the regulator brought the cases to signal to the industry, issuers and investors that AI-related claims must be accurate.

On the policy side, the SEC proposed some of the first rules targeting the use of AI and predictive analytics by industry firms.

Last fall, the SEC consulted on proposed rule changes that addressed AI-related conflicts of interest, required firms to have policies for AI use, and set record-keeping obligations. Those proposals remain under consideration.

The Ontario government, meanwhile, proposed legislation in May that would govern the use of AI by the public sector. But Canadian securities regulators haven’t yet proposed specific rules targeting firms’ AI usage.

In a 2023 report on the state of AI in the capital markets, the OSC flagged concerns such as data-related challenges (e.g., data quality, consistency and security), AI model explainability (how the model generates output), and governance, ethics and regulatory concerns.

The U.S. securities industry is grappling with similar issues, as discussed during the U.S. Financial Industry Regulatory Authority’s (FINRA) annual conference in May. Risks examined included ethics and governance.

“FINRA’s going to ask us for some degree of explainability, and our clients deserve it too,” said Lisa Roth, president of Monahan & Roth LLC, a San Diego-based compliance consulting firm. “Our clients deserve to hear when they’re interacting with AI, what it’s supposed to do, what it does and how that output was determined.”

However, explanations will become more difficult “as the AI becomes more and more sophisticated,” she said.

The OSC report noted that explaining what an AI model is doing and how its outputs are generated is key to establishing trust.

Regulators’ own use of AI is largely at the exploratory stage.

The SEC has begun analyzing how generative AI models could help tackle the regulator’s workload, Marco Enriquez, principal data scientist at the SEC’s division of economic and risk analysis, said at the FINRA conference.

“We’ve found that [large language models (LLMs)] are quite good at giving coding advice,” Enriquez said. “So, we’re exploring this use case.”

AI could also speed up the processing and analysis of the data regulators collect from the markets they oversee.

Enriquez said the SEC collects plenty of “messy, unstructured data,” and AI models could “tidy up some of that data; remove some of the noise.”

One of the biggest opportunities for AI is market surveillance, said Scott Gilbert, vice-president, risk monitoring, member supervision, with FINRA, at the conference.

“One [potential use case] that we share with the industry is this idea of reducing false positives. Whether it’s financial crime or other types of surveillance routines, there’s generally a very high incidence of false positives in the data,” Gilbert said.

The estimated rate of false positives in surveillance for suspicious activity can be as high as 80% to 90%, he added, and AI could reduce that rate.

AI could also streamline the regulators’ ability to analyze the results of policy consultations and formulate responses.

“We get a ton of comment letters in, so one experiment we’re doing now is: can we do summarization and sentiment analysis on comment letters, and can we do it from the perspective of a regulator?” Brad Ahrens, FINRA’s senior vice-president, advanced analytics, said at the conference.

“If we ask an LLM to give us a sentiment analysis from a grumpy customer, it’s obviously going to give us one type of sentiment. But if we say, ‘Give us a sentiment analysis from the perspective of a regulator who’s writing rules,’ ideally that sentiment analysis may come back a little bit different,” Ahrens said.
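The persona framing Ahrens describes is, in practice, a prompting choice. The sketch below illustrates the idea only; it assumes a generic chat-completion API (the OpenAI Python client and a placeholder model name are used here for illustration), and FINRA’s actual tooling and prompts are not public.

```python
# Hypothetical sketch: persona-conditioned sentiment analysis on a comment letter.
# Assumption: the OpenAI Python client and model name below stand in for whatever
# LLM service a regulator might actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REGULATOR_PERSPECTIVE = (
    "You are a securities regulator reviewing public comment letters on a proposed rule. "
    "Classify the letter's sentiment toward the proposal (supportive, opposed, mixed) "
    "and summarize the substantive concerns a rule-writer should weigh."
)

def analyze_comment_letter(letter_text: str) -> str:
    """Return a regulator-perspective sentiment label and summary for one letter."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {"role": "system", "content": REGULATOR_PERSPECTIVE},
            {"role": "user", "content": letter_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "We oppose the proposed amendments because the compliance costs..."
    print(analyze_comment_letter(sample))
```

Swapping the system prompt for, say, “a grumpy customer” would steer the same model toward the very different reading Ahrens contrasts.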

Enriquez suggested several AIs could do this work: “You could have two LLMs talk to each other — one is the generator of the summary and one is the critic — and they’ll just actively talk to each other and refine the summary until they’re both satisfied.”
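The generator-critic pattern Enriquez describes can be sketched as a simple loop: one model drafts a summary, a second critiques it, and the draft is revised until the critic signs off or a round limit is reached. The code below is a minimal illustration under the same assumptions as the previous sketch; the SEC’s actual setup is not public.

```python
# Hypothetical sketch of a generator/critic refinement loop for summarization.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption, as above

def chat(system: str, user: str) -> str:
    """One chat-completion call with a system persona and a user message."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

def refine_summary(document: str, max_rounds: int = 3) -> str:
    """Draft a summary, then let a critic model push revisions until it approves."""
    summary = chat("Summarize this comment letter for a rule-writer.", document)
    for _ in range(max_rounds):
        critique = chat(
            "You are a critic. If the summary is faithful and complete, reply APPROVED. "
            "Otherwise list what is missing or inaccurate.",
            f"Document:\n{document}\n\nSummary:\n{summary}",
        )
        if critique.strip().startswith("APPROVED"):
            break
        summary = chat(
            "Revise the summary to address the critique.",
            f"Document:\n{document}\n\nCurrent summary:\n{summary}\n\nCritique:\n{critique}",
        )
    return summary
```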

Finally, regulators may be able to deploy AI in their interactions with investors. For example, Gilbert said the Central Bank of the Philippines is using a chatbot to screen complaints from bank customers and direct them to the appropriate channel for resolution.

“I think that’s a great example of how regulators can interact with the public with generative AI,” Gilbert said.

However regulators use AI, there’s no shying away from the technology’s growing significance.

“It’s essential that we keep pace. Whether it’s the SEC, FINRA [or] any other regulator, the danger is that we fall behind,” Gilbert said. “We have to be on top of these developments so that we can fulfil our regulatory mission.”
