The C.D. Howe Institute’s new report tells a familiar story: Canadians have never been wealthier, yet their financial literacy has not kept pace. Household net worth has more than doubled since 1990, to an average of roughly $415,000 per household. But fewer than a quarter of working-age Canadians can answer even half of a short set of basic financial questions correctly.

The authors are right about two things. The financial cockpit has become more turbulent as defined-benefit pensions have faded and responsibility for retirement security has shifted onto individuals. And advances in AI could make high-quality guidance far more accessible and affordable.

Where I part company is with the report’s conclusion that governments and regulators should “make financial education a bigger part of helping Canadians chart their course.” The core problem today is a power gap, not just a knowledge gap. We do not suffer from a shortage of information. We suffer from a shortage of tools — and rules — that allow ordinary consumers to turn information into good outcomes in a system designed and priced by others.

If we design AI mainly as an ever-smarter financial textbook, we will end up with slightly better-informed consumers facing exactly the same structural disadvantages. That is not progress.

An important distinction gets glossed over here: knowledge (knowing definitions and concepts) versus empowerment (being able to use that knowledge, at the right moment, to choose, negotiate, switch or complain in a way that changes the outcome).

The C.D. Howe work leans heavily on quiz-style measures of literacy. The results are discouraging. But these tests tell us little about whether Canadians can, in practice, avoid being steered into high-fee products, detect poor advice or get redress when something goes wrong.

You do not need to derive the compound-interest formula to understand that a product charging two percentage points more in fees every year will likely leave you dramatically poorer in retirement. You do need a clear picture of your likely long-term outcomes with different options, a way to compare those outcomes reliably, and the ability and confidence to act.
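
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical illustration, not a figure from the report: a $50,000 lump sum, a 6.5% gross return over 25 years, and annual fees of 0.5% versus 2.5%.

```python
# Minimal fee-drag sketch. All inputs are hypothetical illustrations,
# not figures from the C.D. Howe report.

def future_value(balance: float, gross_return: float, annual_fee: float, years: int) -> float:
    """Compound a lump sum at the gross return minus a flat annual fee rate."""
    net_return = gross_return - annual_fee
    return balance * (1 + net_return) ** years

START, YEARS, GROSS = 50_000, 25, 0.065

low_fee = future_value(START, GROSS, annual_fee=0.005, years=YEARS)   # 0.5% fee
high_fee = future_value(START, GROSS, annual_fee=0.025, years=YEARS)  # 2.5% fee

print(f"Low-fee product:  ${low_fee:,.0f}")                         # ~$214,600
print(f"High-fee product: ${high_fee:,.0f}")                        # ~$133,300
print(f"Cost of the extra two points: ${low_fee - high_fee:,.0f}")  # ~$81,300
```

Under those invented assumptions, the extra two points of fees consume roughly $81,000 of the final balance, the same order of magnitude as the $80,000 example below. That is the kind of consequence a consumer can act on without ever seeing the formula.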

A literacy-only lens also risks blaming consumers for failures that flow from product design, disclosure practices and incentives. When a mortgage is sold with dense legal language, complex penalties and an aggressive sales culture, the issue is not just that the borrower did not read page 27. It is that the system was never built for them to succeed without professional help in the first place.

Real empowerment

Empowered consumers experience at least four things: radical clarity about consequences (not “this is a TFSA” but “using this product instead of that one is likely to cost you about $80,000 over 25 years”); simple, fair default choices; low-friction ability to switch and say no; and effective recourse when things go wrong.

Financial literacy, in the narrow sense of answering test questions, is at best a supporting actor in this story. The starring role belongs to the institutional architecture that shapes choices and the tools people have at the moment of decision.

The C.D. Howe authors are cautiously optimistic about AI as a way to deliver more tailored financial education. They suggest tools built on large language models, trained on the expertise of certified planners and made available to the public, potentially hosted by regulators or organizations such as FP Canada.

That is constructive. But when the report recommends that governments “make financial education a bigger part” of their response, it doubles down on the wrong priority. We have tried the education-first approach for two decades. The results speak for themselves.

If we design AI as a co-pilot rather than just a tutor, we ask tougher questions: When a consumer is about to lock into a mortgage, can an independent AI tool show them in plain language how this deal compares to alternatives? When someone is deciding between an RRSP and a TFSA, can an AI tool simulate their specific situation and present a simple recommendation range instead of generic rules of thumb? When a new product is launched, can regulators use AI to scan marketing materials and transactional data and flag patterns that suggest consumers are being mis-sold?
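
The RRSP-versus-TFSA question shows what “simulate their specific situation” could mean in practice. What follows is a deliberately stripped-down sketch, not a real planner: every input is invented, and it ignores contribution room, clawbacks of income-tested benefits and changing income. Even so, it surfaces the driver that generic rules of thumb bury.

```python
import math

# Hypothetical sketch of an RRSP-vs-TFSA comparison a co-pilot tool
# could run. Tax rates, return and horizon are invented inputs, not advice.

def rrsp_after_tax(pre_tax: float, growth: float, years: int, rate_retirement: float) -> float:
    """RRSP: the full pre-tax amount goes in (the deduction offsets tax
    today); the eventual withdrawal is taxed as income."""
    return pre_tax * growth ** years * (1 - rate_retirement)

def tfsa_after_tax(pre_tax: float, growth: float, years: int, rate_today: float) -> float:
    """TFSA: only the after-tax remainder goes in, but withdrawals are tax-free."""
    return pre_tax * (1 - rate_today) * growth ** years

GROWTH, YEARS, PRE_TAX = 1.05, 25, 10_000  # 5% return on $10,000 of pre-tax income

for today, retirement in [(0.40, 0.25), (0.30, 0.30), (0.25, 0.40)]:
    rrsp = rrsp_after_tax(PRE_TAX, GROWTH, YEARS, retirement)
    tfsa = tfsa_after_tax(PRE_TAX, GROWTH, YEARS, today)
    verdict = "tie" if math.isclose(rrsp, tfsa) else ("RRSP" if rrsp > tfsa else "TFSA")
    print(f"taxed {today:.0%} now, {retirement:.0%} in retirement: "
          f"RRSP ${rrsp:,.0f} vs TFSA ${tfsa:,.0f} -> {verdict}")
```

The toy’s one honest lesson is that the choice turns on the gap between your tax rate now and in retirement, which is precisely the personalized answer a quiz-style literacy campaign never delivers.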

That is AI for empowerment. It moves beyond explanation into protection, comparison and execution.

Canada is about to invest heavily in AI infrastructure. Budget 2025 sets aside close to a billion dollars over five years for sovereign AI compute. If consumer advocates and regulators are not at that table, these investments will flow almost entirely into productivity tools for business — not the same thing as rebalancing power in household finance.

The wrong goal

The report properly highlights risks: hallucinations, biased recommendations and the need for guardrails and human oversight. Those are real concerns. But the harder question is: guardrails around what objective?

If the goal is “financial education,” we will judge success by how many people use the chatbot and how many quiz questions they answer correctly afterwards. If the goal is empowerment, we must be blunt and outcome-focused: Do consumers using AI tools end up in lower-fee products? Do complaint volumes fall in product lines where independent AI tools are available? Do households using AI-assisted guidance reach retirement with more stable income and less exposure to obvious missteps?

These are measurable outcomes. They are also the outcomes that matter.

When the C.D. Howe report urges governments and regulators to “set clear leadership and guardrails for AI in personal finance,” it should be explicit about what those guardrails are protecting and enabling. Guardrails that produce smarter victims are not good enough. We need guardrails that produce stronger consumers.

Regulators and agencies such as the Financial Consumer Agency of Canada should articulate this empowerment goal explicitly. When they approve or promote AI tools, the bar should be that those tools demonstrably support better consumer outcomes, not just better scores on a literacy index.

The C.D. Howe metaphor is that Canadians have been “moved from passengers to pilots in their own financial airplanes, with many having little flight training.” That is vivid. But any pilot will tell you that training is only one part of safety. The design of the cockpit, the reliability of the instruments and the rules of the airspace matter just as much.

In household finance, we have spent decades focusing on training and comparatively little on redesigning the cockpit or modernizing the control tower.

AI now gives us a chance to do that redesign: to build public-interest tools that translate complexity into simple, personalized guidance at the moment of decision; to give regulators the analytical horsepower to detect harmful patterns in real time; and to give consumers practical ways to compare, switch and seek redress.

The C.D. Howe Institute is right that “governments and regulators should set clear leadership” for AI in personal finance. But that leadership must focus on empowerment, not education. If we treat AI as a shiny new channel for financial literacy campaigns, we will get what we have had for the past 20 years: modest improvements in knowledge and very little movement in outcomes.

If instead we set our sights on consumer empowerment, AI can help us give Canadians the tools to fly with confidence, not just the manuals that explain why the turbulence feels so rough.

Harvey Naglie is a consumer advocate and policy analyst focused on financial regulation.