AI-powered search tools struggle with pension content, but most users trust the answers

AI-powered search tools are struggling with pension scheme content and regularly provide members with incorrect information, yet around 70 per cent of users accept the answers they are given at face value.

According to Quietroom, 92 per cent of UK users rely on Google for online searches and AI-generated overviews now appear in around half of all results. Around 70 per cent accept these summaries at face value without checking sources or visiting the underlying webpages.

Quietroom tested OpenAI’s Operator agent on UK pension scheme websites and found that it failed to read information hidden in expandable accordions, and as a result gave the wrong answer about when a deferred member could retire. ChatGPT and Google also struggled with basic scheme questions that were answered on the site, generating responses based on other schemes with different rules.

Quietroom found that AI sometimes makes mistakes in complex calculations and directs members to third-party advisers and unreliable firms.

The tests also show that both humans and AI struggle with poorly structured content. AI can fail to tell different member groups apart, oversimplify technical language and treat outdated information as current.

The growth of zero-click searches means members may never see scheme content as intended, or at all.

The FCA has made clear that the Consumer Duty applies to AI-driven interactions, and that schemes remain accountable when members make decisions based on incorrect information.

Quietroom director Simon Grover says: “Members are no longer reading what their scheme has written – they’re reading what AI tools serve up, which may or may not be accurate. And they’re asking AI to give them key points and what decision they should make.

“The solution isn’t to write for robots, but to write better for humans. Our research shows that AI does a much better job accurately summarising or explaining content if that content is already clear, consistent, well-structured and in short sentences.

“The more ‘gappy’, complex, and difficult it is to use your content, the more likely it is that generative AI will give your members incorrect answers.”
