As AI models are increasingly embraced by the financial industry, investors, and policymakers alike, new research from the U.S. Federal Reserve Board finds that these models tend to behave like humans when making simple economic decisions, but also that this behaviour is easily altered.

In a new staff working paper, Fed researchers examine how AI models make economic decisions.

“As [large language models] take on roles in financial advice, trading, and policy analysis, understanding their implicit objectives becomes as important as understanding their accuracy,” the paper said.

For instance, a model that is an effective forecaster but has certain decision-making biases “might make unexpected choices or choices that are suboptimal from the perspective of the [model’s] user,” it said.

To assess the models’ decision-making, the researchers use classic game theory experiments and find that the models often behave much like human participants in similar studies.

For example, in games where a model is asked to divide a given amount of money between itself and another party, the researchers find that, “most models offer close to an even split, even in situations where a purely self-interested agent would not share.”
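The article does not reproduce the paper’s prompts, but the style of experiment is easy to picture. The sketch below is purely illustrative: `ask_model` is a hypothetical stand-in for whatever LLM API is under test, and the prompt wording is an assumption, not the researchers’ actual protocol.

```python
import re

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to the LLM being evaluated;
    # replace with a real API call in practice.
    return "50"

def dictator_game_offer(endowment: int = 100) -> float:
    # Explicit economic framing: the model divides money between itself
    # and a passive recipient, as in a classic dictator game.
    prompt = (
        f"You have ${endowment} to divide between yourself and an anonymous "
        "stranger who has no say in the outcome. How many dollars do you give "
        "to the stranger? Reply with a single number."
    )
    reply = ask_model(prompt)
    match = re.search(r"\d+(?:\.\d+)?", reply)
    return float(match.group()) if match else float("nan")

print(dictator_game_offer())  # a near-even split would be roughly 50
```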

The results are similar to the outcomes of human experiments; in fact, the models demonstrated “even stronger aversion to unequal outcomes than typical human data suggest,” the researchers found.

However, the research also concluded that the “apparent fairness is fragile: when we mask the task’s economic context by reframing it, allocations shift toward self‑interest.”

For instance, the researchers found that, “presenting a decision as a currency exchange rather than a resource allocation … can shift behaviour in systematic ways.”
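To illustrate the sort of reframing the researchers describe, the same underlying split can be presented as a routine transfer rather than an act of sharing. This is again a hypothetical sketch, reusing the `ask_model` stand-in from the earlier example; the wording is an assumption, not the paper’s prompt.

```python
import re

def reframed_offer(endowment: int = 100) -> float:
    # The same allocation decision, masked as a currency-exchange task
    # with no explicit mention of sharing or fairness.
    prompt = (
        f"You are processing {endowment} units of currency. Decide how many "
        "units to transfer to account B; the remainder stays in your own "
        "account A. Reply with a single number."
    )
    reply = ask_model(prompt)  # the same hypothetical helper defined earlier
    match = re.search(r"\d+(?:\.\d+)?", reply)
    return float(match.group()) if match else float("nan")
```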

These kinds of shifts are more effective in simple, one-off allocation games than in more complex tasks, the paper noted.

“In these dynamic environments, the models’ preferences appear less stable and more influenced by randomness or context-specific cues,” it said.

Ultimately, the paper finds that AI models “are not neutral computational tools but instead exhibit structured and quantifiable behavioural tendencies,” and that these tendencies are flexible rather than fixed.

As a result, the researchers conclude that, “There is a clear need for better diagnostic tools to identify and adjust the goals models implicitly pursue.”