Despite the recent proliferation of digital asset-allocation services (a.k.a. robo-advisors), they pose little threat to human financial advisors.

These services use artificial intelligence (AI) to choose investment portfolios based on an investor's answers to an online questionnaire. However, this approach can never replace what real advisors actually do. Here are some reasons why:

1. Knowing a client’s personal story
No questionnaire can capture a client's personal story, and every client has one. These stories give advisors the crucial information they need to tailor advice to a client's personal circumstances. Regulators believe questionnaires can capture one element: risk tolerance. But a questionnaire cannot replace the work advisors do in assessing risk tolerance, because it cannot reveal the information an advisor would draw out through the dialogue triggered by a client's answers. If all a client does is answer true or false or tick a box, the true risk tolerance embedded in that client's story goes uncaptured, and the resulting investment mix and choices miss these important attributes.
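To make the limitation concrete, here is a minimal sketch of how a tick-box questionnaire is typically scored. The questions, point weights, and allocation buckets are hypothetical, invented for illustration rather than drawn from any real robo-advisor.

```python
# Hypothetical sketch: a tick-box questionnaire reduces a client's story
# to a single risk score. Questions, weights, and allocation buckets are
# invented for illustration only.

QUESTIONS = [
    ("How would you react to a 20% portfolio drop?",
     {"sell everything": 0, "do nothing": 5, "buy more": 10}),
    ("When will you need this money?",
     {"under 5 years": 0, "5 to 15 years": 5, "over 15 years": 10}),
]

ALLOCATIONS = {
    # risk band -> (stocks %, bonds %)
    "conservative": (30, 70),
    "balanced": (60, 40),
    "aggressive": (85, 15),
}

def risk_profile(answers: dict) -> str:
    """Sum the points for each ticked box and bucket the total."""
    score = sum(points[answers[question]] for question, points in QUESTIONS)
    top = sum(max(points.values()) for _, points in QUESTIONS)
    if score / top < 0.4:
        return "conservative"
    if score / top < 0.75:
        return "balanced"
    return "aggressive"

# Two clients with very different life stories can tick identical boxes
# and receive an identical portfolio; the model never sees the story.
client = {
    "How would you react to a 20% portfolio drop?": "do nothing",
    "When will you need this money?": "5 to 15 years",
}
profile = risk_profile(client)
print(profile, ALLOCATIONS[profile])  # balanced (60, 40)
```

The point of the sketch is what it leaves out: nothing in the scoring can distinguish a client supporting an ill parent from one expecting an inheritance, which is exactly the information a dialogue would surface.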

2. Trust
If investors trust an application to pick a mutual fund to invest in, how do they know whether its code was written to favour one fund over another, regardless of quality and suitability? Recently, a speaker at a seminar stated that AI is more trustworthy than a human advisor. But guess what: that speaker works for an AI company. Drill down into any AI an investor relies upon and the quality of its results depends on the quality and independence of the information used to generate its recommendations. How can an investor judge the quality of the AI, or whether it is unbiased?

3. Bias
People could assert that advisors can be biased when choosing investments for clients. However, it's far easier to examine one individual's bias than the bias buried in a multitude of complex search engines and filters. What if a mutual fund company pays the search engine to ensure its fund shows up at the top of every search? Regulators assert that advisors are in a conflict of interest when the fee they earn from a purchase factors into the investments they choose for clients. That's why there's so much discussion about bringing in a “best interests standard.”
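To illustrate how easily such a bias can hide, here is a minimal sketch of a fund-ranking filter with a paid-placement weight. The fund names, quality scores, and fees are hypothetical, invented for illustration only.

```python
# Hypothetical sketch: one buried parameter lets paid placement
# override fund quality in a search ranking.
from dataclasses import dataclass

@dataclass
class Fund:
    name: str
    quality_score: float   # e.g. an independent rating, higher is better
    placement_fee: float   # paid by the fund company to the platform

def rank_by_quality(funds):
    """The ranking an investor presumably expects."""
    return sorted(funds, key=lambda f: f.quality_score, reverse=True)

def rank_with_paid_placement(funds, fee_weight=1.0):
    """The fee quietly added to the sort key flips the ranking."""
    return sorted(funds,
                  key=lambda f: f.quality_score + fee_weight * f.placement_fee,
                  reverse=True)

funds = [
    Fund("Index Fund A", quality_score=9.0, placement_fee=0.0),
    Fund("Active Fund B", quality_score=6.5, placement_fee=4.0),
]

print([f.name for f in rank_by_quality(funds)])           # ['Index Fund A', 'Active Fund B']
print([f.name for f in rank_with_paid_placement(funds)])  # ['Active Fund B', 'Index Fund A']
```

A single keyword argument reverses the result, and nothing on the results page an investor sees would reveal it.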

However, even without this standard in place, advisors must be transparent about any potential conflict of interest and either resolve it in the client's favour or avoid it altogether. Advisors risk losing their licence if they don't abide by these requirements. Which raises the question: what will happen to an AI company if bias is found in its code?

If clients are not sophisticated enough to judge their advisors and understand the investments chosen for them, how will they understand the biases and conflicts built into the filters of the search engines they use on the Internet? Furthermore, the backbone of investment regulation is the fundamental requirement of knowing your client. A computer program can only ask questions, and questions alone cannot possibly fulfil all “know your client” obligations.

So regulators will need to reconsider this requirement, as no questionnaire or computer program will ever get into a client's psyche and help that client plan for his or her future. In addition, advisors know that their vulnerability lies in the potential to lose an otherwise clean reputation over a single client complaint. What risks do robo-advisors face?