
Financial advisors and their firms were targeted by AI-powered impersonation scams in 2025, posing risks to reputations, client trust and regulatory compliance. In some of these schemes, bad actors posed as trusted contacts to defraud targets.

“Gone are the days when you could discern a scam based on typos or poor language,” said Lester Chng, senior cybersecurity advisor at Toronto Metropolitan University.

“AI tools allow cybercriminals to craft convincing messages and research targets, making emails and social media content far more realistic,” he said. “Previously, generating videos, text and images to make an account look legitimate took significant effort. AI accelerates that process, allowing fraudsters to expand their reach and increase the likelihood of success.”

The Canadian Investment Regulatory Organization (CIRO) issued an investor alert in July after IG Wealth was impersonated on social media earlier this year.

“In January 2025, we became aware of fraudulent social media accounts impersonating our brand. Since that time, we have taken steps to report the issue to the applicable social media platform and to the appropriate regulators, including CIRO and the [Ontario Securities Commission],” IG Wealth told us in an email.

The firm alerted advisors, updated its fraud prevention webpage, warned clients and posted social media alerts. “To date, we have no evidence that IG clients have been targeted or affected,” IG Wealth said.

David Rosenberg targeted

Regulators warn that impersonation scams can cause serious reputational damage even when clients aren’t directly affected. David Rosenberg, founder of Rosenberg Research, was alerted by friends that an AI likeness of him was being used to promote fake investment opportunities this year.

He first learned of the scam when a friend texted him about a fund he appeared to have endorsed. Soon, others reported similar messages.

“It started with one friend in March or April. Then others reached out, asking about this fund I was supposedly advertising. In some cases, they’d already filled out forms with personal details. That’s when I realized this was a serious problem,” he said. “These bots have perfected my speech, my tone, my facial features and they even use my firm’s logo. They’ll have a fake assistant ready to steal your money. It’s gotten completely out of control.”

CIRO warns that fraudsters are refining their tactics, often mimicking websites, emails and social media accounts of legitimate firms or claiming to be registered with CIRO.

“[B]e aware that people may try to present as your clients. CIRO has encountered cases where advisors did not comply with their dealer’s policies and procedures or with regulatory requirements to verify client identity,” said Alexandra Williams, senior vice-president of strategy, innovation and stakeholder protection.

She added that “advisors play an important role in protecting their clients and they should keep this top of mind, especially if receiving urgent or unusual requests for redemptions. These scams often come as urgent requests for redemptions and transfers to new bank accounts. For advisors, this should be a red flag. Some advisors have gotten into regulatory difficulty by not having fulfilled the requirement to verify the identity of the client.”

To address the threat, the Canadian Anti-Scam Coalition (CASC) was established in September. The cross-sector coalition includes organizations from financial, telecom, technology and law enforcement sectors.

Think before you click

Awareness and education remain essential for advisors protecting clients from AI-driven cyber fraud. Chng said financial advisors and firms can use tools to help safeguard clients, including email filters that flag suspicious messages and emerging technologies that detect AI-generated content such as deepfake videos and synthetic voices.

Chng said these tools should be part of a broader strategy that includes client education and careful verification of unusual requests.

“From an email perspective, there are tools that already exist to filter potentially dangerous emails — whether based on address or attachments — and those will continue to improve,” Chng said.

Rosenberg’s firm has developed internal guidelines to educate employees about spotting suspicious communications.

“The problem is systemic,” he said. “Social media companies profit from these ads being online. We’ve reported fraudulent ads multiple times, but they remain. Awareness is the most effective solution. People need to think before they click.”