19/11/2025

Thinking fast and slow: improving how we understand and control the use of LLMs

LLMs may sound like careful thinkers. But beneath the surface, there is just statistical pattern-matching. Financial services firms must stay sceptical, enforce human oversight, and design for safety.

By Brandon Horwitz (with some help from ChatGPT)*, Chair of the IFoA’s AI Ethics, Governance and Risk Management Working Party. This is the fourth in a series of blogs by the working party.

In his seminal book ‘Thinking, Fast and Slow’, Nobel laureate Daniel Kahneman argued that human thinking runs on two tracks:

  • System 1: fast, intuitive, emotional
  • System 2: slow, deliberate, analytical

Kahneman was writing about human psychology, but his framework gives us a powerful way of thinking about how large language models (LLMs) – like ChatGPT or Claude – behave, and the ethical and governance challenges they create.

At first glance, LLMs look like System 2 in action. They produce coherent, reasoned prose, and can explain themselves when asked. But beneath the surface, there is often no genuine reasoning – just statistical pattern-matching. That distinction matters enormously for financial services firms, their boards, and regulators deciding when (and how) to trust these tools.

 

Reasoning without reason

Kahneman warns that humans are prone to accept fluent stories as truth. That is exactly how LLMs work. They generate fluent, persuasive answers, can justify themselves when asked ‘why’, and even simulate internal deliberation. But there is no genuine chain of logic, no intent, no understanding.

It is System 1 dressed up in System 2 clothing. And that can be dangerous:

  • Over-reliance: Users may over-trust answers that only sound reasoned.
  • Bias reinforcement: Trained on flawed data, LLMs can produce biased outputs, cloaked in confident language.
  • False accountability: When a model ‘rationalises’ an error, it is hard to know where the mistake lies – or who is responsible.

 

Governance challenges

1. Transparency and explainability

Everyone should be interested in how AI tools reach their conclusions. But with LLMs, ‘explanations’ are often just post-hoc fabrications. Unlike traditional algorithms, there is no logic trail to follow.

The temptation is to confuse fluency with explainability. Firms need policies – and ‘human-in-the-loop’ controls – wherever AI could materially affect customers or staff.
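To make the idea concrete, the sketch below shows one way a ‘human-in-the-loop’ gate could sit around an LLM in code. It is a minimal illustration only, not a prescribed control: the names (generate_draft, release, affects_customer) are hypothetical, and the model call is a placeholder rather than any particular product’s API.

```python
# Minimal sketch of a 'human-in-the-loop' control around LLM output.
# All names here (Draft, generate_draft, release) are hypothetical; the point
# is that output which could materially affect a customer is never released
# without explicit human sign-off.

from dataclasses import dataclass


@dataclass
class Draft:
    prompt: str
    text: str
    affects_customer: bool  # flagged by the calling workflow, not by the model


def generate_draft(prompt: str) -> Draft:
    # Placeholder for a call to whatever LLM a firm uses; the response is
    # treated only as a draft, never as a finished customer communication.
    return Draft(prompt=prompt, text="<model output>", affects_customer=True)


def release(draft: Draft, reviewer_approved: bool) -> str:
    """Release a draft only if it has no material customer impact,
    or a named human reviewer has explicitly approved it."""
    if draft.affects_customer and not reviewer_approved:
        raise PermissionError("Material output requires human sign-off before release.")
    return draft.text


# Usage: a draft that could affect a customer is blocked until a reviewer approves it.
draft = generate_draft("Explain the charges on this pension product.")
try:
    release(draft, reviewer_approved=False)
except PermissionError as exc:
    print(exc)  # in practice, routed to a review queue rather than the customer
```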

2. Consumer protection and advice

In financial services, LLMs are already being tested in customer service, disclosure and even investment guidance. Left unchecked, their confident fluency could mislead.

Firms must frame these tools carefully, with clear signposting and disclaimers, to avoid creating unrealistic trust. Failing to do so risks regulatory breaches and poor customer outcomes.

3. Accountability and responsibility

System 2 is about agency and responsibility. LLMs do not have either. If an AI-generated recommendation goes wrong, responsibility still sits with the humans: the firm, the board, the individual.

Regulated firms cannot allow accountability to blur. Where LLMs are used in suitability, communications or other regulated functions, human ownership of outcomes must remain crystal clear.

 

Human bias meets machine bias

Kahneman’s work on cognitive bias also sheds light on how people interact with LLMs:

  • Anchoring: Users may cling to the first answer they see.
  • Confirmation bias: LLMs can reflect back existing assumptions.
  • Overconfidence: A tone of certainty can lull even cautious users.

Good risk management is not just about testing whether the model outputs look plausible. It is about testing how people use them. Behavioural testing matters just as much for staff as for customers – because both groups are influenced by the illusion of System 2 reasoning.


From psychology to policy

Thinking of LLMs as pseudo-System 2 helps us demystify them but also highlights the risk of anthropomorphism – assigning human qualities they do not have. Here are some useful principles to help shape policy and controls when using AI:

  • Recognise the mimicry: LLMs simulate reasoning; they do not reason. Build safeguards around that, including prompting LLMs to ‘show their workings’ and testing for consistency and reliable provenance of information presented as facts.
  • Enforce structured challenge: Make human oversight and sceptical review part of the process. AI tools are a great place to start, but their output should not be relied on exclusively (a sketch of one such challenge step follows this list).
  • Educate users: Use simple mental models – such as Kahneman’s Systems 1 and 2 thinking – to help colleagues think critically about what AI is and how they use it.
  • Design for safety: Create decision environments that make biases and limitations easier to spot. AI is just as good at arguing against something as it is at arguing for it. Ask for evidence that supports an opposing view or that looks at an issue from a different perspective.
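As a rough sketch of what ‘structured challenge’ could look like in practice, the code below asks the model to show its workings, then asks it to argue the opposing view, and hands both to a human reviewer. The prompt wording and the llm() helper are assumptions for illustration, not a specific vendor’s API, and the model is never left to adjudicate its own answer.

```python
# Rough sketch of 'structured challenge': ask the model to show its workings,
# then ask it to argue the opposite case, and give a human reviewer both views.
# llm() is a hypothetical stand-in for whatever model call a firm actually uses.


def llm(prompt: str) -> str:
    return "<model output>"  # placeholder for a real model call


def structured_challenge(question: str) -> dict:
    first_answer = llm(
        f"{question}\nShow your workings step by step and cite the sources you rely on."
    )
    counter_view = llm(
        f"Here is a draft answer:\n{first_answer}\n"
        "Now argue the opposing view and list evidence that would contradict the draft."
    )
    # Both views go to a human reviewer; the model does not judge its own output.
    return {"question": question, "answer": first_answer, "challenge": counter_view}


review_pack = structured_challenge("Is this fund suitable for a cautious retail investor?")
```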

 

Conclusion

Kahneman reminds us that human judgment is fallible – even when it feels careful and deliberate. The same is true of machines that imitate careful reasoning.

LLMs may sound like System 2, but they are really System 1 in disguise. For financial services firms, their boards and regulators, the challenge is not just technical. It is ethical and behavioural. The task is to slow down, stay sceptical, and keep humans accountable – so we do not mistake fluency for thought, mimicry for intelligence, or automation for responsibility.



*This article grew out of my reflections after reading Kahneman’s Thinking, Fast and Slow. I used ChatGPT to test and shape my ideas – often debating with it about its own limits. The words here were drafted with its help, but the framing and perspective are mine, shaped by my work advising boards and regulators in financial services.

Thanks also to my fellow members of the IFoA AI Ethics, Governance and Risk Management Working Party for kindly reviewing and helping to improve this article, demonstrating the value of combining human intelligence (HI) with artificial intelligence (AI)! 
