The International Organization of Securities Commissions (IOSCO) is proposing new guidance for supervising the financial industry’s adoption of artificial intelligence (AI) and machine learning (ML), with the aim of preventing customer harm and mitigating potential risks.
“The use of these technologies may benefit firms and investors, such as by increasing execution speed and reducing the cost of investment services,” IOSCO said in a report. “However, it may also create or amplify risks, potentially undermining financial markets efficiency and causing harm to consumers and other market participants.”
Regulators are concerned about a variety of risks arising from industry firms’ use of AI/ML, including data quality, ethical and bias concerns, outsourcing, and the development, testing and ongoing monitoring of algorithms.
For instance, IOSCO said ethical concerns “may arise when models develop certain social biases and potentially recommend undesirable outcomes” due to how data are handled.
“Market participants should be careful when developing [predictive AI tools] using large pools of alternative datasets, such as satellite data or Twitter feeds, to seek to ensure that the developed models would not discriminate against a certain segment of the population and that the AI and ML driven decisions are fair and unbiased,” IOSCO said in its report.
The guidance aims to address these concerns through a series of recommendations: that regulators ensure firms’ use of these technologies is subject to proper governance, controls and oversight; that firms conduct adequate testing to catch issues before launching new technology; and that firms disclose their use of AI/ML to regulators, investors and others.
The deadline for feedback on the proposals is October 26.