Practice management tools are increasingly touting AI-powered functions, so advisors need to arm themselves with the right knowledge and internal controls to use them appropriately, according to a financial planning software expert.
“Client protection should outweigh convenience,” said Dave Faulkner, creator of the financial planning software VibePlan and RazorPlan, at the Independent Financial Brokers of Canada’s annual virtual conference last month. “Saving 15 minutes on an email isn’t worth risking your licence over a privacy violation.”
From a regulatory perspective, AI is permitted, but there must be a human in the loop and suitable guardrails to remain compliant, he stressed.
AI chatbots and CIRO Rule 3600
Firms that use AI chatbots must configure them in line with industry regulations.
Under the Canadian Investment Regulatory Organization’s (CIRO) Rule 3600, digital platforms for real-time or near real-time public communications, including customer-facing AI chatbots, are subject to the same accuracy, fairness and truthfulness standards as all other forms of communications.
For example, AI chatbots can provide general education and information to clients, like explaining how registered accounts work, but can’t give specific investment recommendations, Faulkner explained.
Firms can set up an enclosed AI environment in which the chatbot answers client queries only from pre-approved documents, such as market commentaries and the firm’s own library.
After the conversation, these chatbots can flag risky terms such as “bitcoin” or “guaranteed return” for human review, so advisors can follow up with clients and offer further education, Faulkner said.
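Faulkner didn’t walk through an implementation, but the controls he describes map to a retrieval-restricted design: the chatbot may draw only on an approved corpus, and transcripts are scanned afterward for terms that warrant advisor review. A minimal Python sketch of that pattern (the document set, term list and function names are illustrative assumptions, not from the talk):

```python
import re

# Illustrative sketch of a closed-corpus chatbot guardrail. Retrieval is
# reduced to keyword overlap; a production system would pair a vector
# store with an LLM, but the control logic has the same shape.

APPROVED_DOCS = {
    "tfsa_basics": "TFSA growth and withdrawals are generally tax free.",
    "rrsp_basics": "RRSP contributions defer tax until funds are withdrawn.",
}

REVIEW_TERMS = ["bitcoin", "guaranteed return"]  # flag for advisor follow-up

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(query: str) -> str:
    """Reply only from the pre-approved library; otherwise decline."""
    for doc in APPROVED_DOCS.values():
        if tokens(query) & tokens(doc):
            return doc  # real system: generate a response grounded in this doc
    return "I can only share approved educational material; please ask your advisor."

def flag_for_review(transcript: str) -> list[str]:
    """Scan a finished conversation for terms that need human review."""
    lowered = transcript.lower()
    return [term for term in REVIEW_TERMS if term in lowered]

convo = "Is a guaranteed return possible if I hold bitcoin in my TFSA?"
print(answer(convo))           # matches the approved TFSA document
print(flag_for_review(convo))  # ['bitcoin', 'guaranteed return']
```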
Black box trap and CSA Staff Notice 11-348
Similar to requirements for human advisors, the reasoning and data that led to an AI decision must be recorded so the output can be explained.
“Black box” systems, which aren’t transparent about their decision-making process, aren’t suitable and don’t establish trust, according to CSA Staff Notice 11-348.
“‘I didn’t know the AI suggested it,’ is not a defence,” Faulkner said. “You need to be able to describe an algorithm’s logic to a regulator, or you haven’t met your know-your-product obligation.”
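Neither the staff notice nor Faulkner prescribes a record format, but meeting that explainability bar implies capturing, for every AI-assisted output, enough context to reconstruct it: the inputs, the sources consulted, the model version and the plain-language rationale. A hypothetical Python sketch of such an audit record (all field names and values are assumptions for illustration):

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical audit record for an AI-assisted suggestion: one way to
# capture "the reasoning and data that led to an AI decision."

@dataclass
class AIDecisionRecord:
    model_version: str   # which model/algorithm produced the output
    inputs: dict         # the client data the model actually saw
    sources: list[str]   # documents or data feeds consulted
    output: str          # what the system suggested
    rationale: str       # plain-language logic, explainable to a regulator
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    model_version="portfolio-assistant-2024.1",
    inputs={"risk_tolerance": "moderate", "horizon_years": 15},
    sources=["firm_model_portfolios.pdf"],
    output="Suggested the firm's 60/40 balanced model portfolio",
    rationale="Moderate risk tolerance and a 15-year horizon map to the "
              "balanced model under the firm's suitability matrix.",
)

# Persist as an append-only log entry so the decision can be reconstructed.
print(json.dumps(asdict(record), indent=2))
```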
Advisors must also disclose the breadth and depth of AI use to clients, so they can understand the material risks associated with that use, the staff notice said.
Think twice before uploading
Clients may be using AI to double-check an advisor’s work by uploading account statements to publicly accessible AI tools such as ChatGPT, which could put their personal information at risk.
“Your client uploaded their investment statement; how do you prove you weren’t the source of the breach?” Faulkner said, noting that public AI systems don’t promise confidentiality.
Therefore, advisors should have an AI safety conversation with clients, telling them not to upload personal information to public tools, and then document that conversation in their notes to protect themselves, Faulkner said.
Another risk to consider is that criminals use AI to phish for advisors’ personal financial information.
For example, an advisor may post on LinkedIn about plans to attend an upcoming conference, and a bad actor can use AI to craft a fake reimbursement email asking the advisor to upload receipts and bank account information.
“AI spear phishing is intentional, personal and direct,” Faulkner said. “The next thing you know, your rent payment bounced because [the fraudster created] a one-off website to collect your personal banking information.”