Quebec’s financial regulator, the Autorité des marchés financiers (AMF), has outlined best practices for the responsible use of artificial intelligence (AI).

The regulator published a paper on Monday as it seeks input from the financial sector on the opportunities and risks in deploying AI. Financial industry stakeholders in Quebec have until June 14 to submit their comments to the AMF, which said the discussion will inform future AI guidelines.

The paper focuses on six areas.

Consumer protection

Financial institutions should protect consumers from unfair, abusive or misleading practices, the AMF said. For example, firms should prevent AI systems from exhibiting discriminatory biases, exploiting human behaviour or exposing investors to fraud risks such as deepfakes. Ethical risks can be identified by involving consumers as stakeholders during the design process.

If an AI system, such as a chatbot, presents itself as human, consumers should be clearly informed that they are interacting with an automated system. Companies should also obtain customers’ consent before making automated decisions with AI, the paper said, and offer options for those who do not want to be monitored by AI or to interact with it.

Transparency

Disclosures about the risks of AI and how they are mitigated should encourage consumers to read the information carefully rather than prompt them to accept it quickly. All AI-driven outcomes should be traceable and explainable to the customer, and erroneous outcomes should be explained in appropriate detail in plain language, the AMF said. Customers should also be informed of the remedies available to them if they are harmed by AI.

Justifying AI use

Each use of AI should provide some benefit to consumers and be easy to summarize. Firms should reconsider their use of AI when other options deliver equivalent outcomes with fewer risks, the AMF said.

Responsibility of firms

Firms should be responsible for the decisions, benefits and harms of their AI systems, even when using a third-party system, the paper said. Employees should be individually accountable for their actions when using AI systems, and humans should review all AI decisions that adversely affect consumers or have a high impact on their financial well-being.

AI governance

Financial firms should establish a governance structure to oversee the use of AI. It should include a code of ethics and sanctions for non-compliance, and allow the anonymous reporting of issues without fear of reprisals, the paper suggested.

Firms should also be able to identify the risks that arise when the same dataset is used by two different AI systems. In addition, the teams developing AI applications should be as diverse as their end users.

Risk mitigation

For institutions, deploying AI can create reputational risks, as well as privacy breaches stemming from the large volumes of data involved, and AI systems can be manipulated by hackers. Companies should ensure the security of their data and mitigate discriminatory biases in training data, the AMF said. After deployment, AI systems should be monitored and subject to regular audits.

The full paper is available on the AMF’s website.