Artificial intelligence is a powerful tool with the potential to reshape many aspects of the pensions landscape, from advice and administration to the way people engage with their retirement savings. This is no longer tomorrow’s technology though, providers are starting to utilise AI algorithms today. READ THE ROUND TABLE PDF HERE
Consultants were able to test one of these prototypes at a recent Corporate Adviser roundtable, developed by Mercer and Engage Smarter.
Mercer Workplace Savings head of proposition Stephen Coates said: “AI will inevitably have a significant impact on our industry; we want to be at the forefront of these developments.”
Mercer has created what it is calling a ‘conversational AI agent’ functioning like a highly effective chatbot, facilitating meaningful conversations and two-way dialogue.
This tool answers members’ questions by filtering relevant information and data, producing an easy-to-read answer that should be intelligible to most people. In contrast a comparable Google search generates pages of sponsored and unsponsored links. Crucially, the AI version also prompts users with follow-up questions or encouraging users to consider potentially overlooked issues.
“We want people to use this information to make better financial decisions that will ultimately lead to better outcomes in retirement,” said Mercer’s head of engagement Tom Higham.
Currently, responses are not personalised, but Higham stresses this is just the first phase of its development. It plans to integrate relevant scheme data later this year, followed by individual data to create a rich, data-driven experience.
Mercer is in discussion with EV, which powers its modelling tools, to connect these to the AI system. The provider also has substantial financial information through its Destination Retirement advice and guidance tool.
Phased launch
In terms of timetable, Coates said this initial version should go live at the end of the year, giving generic but detailed pension information. The next phase will provide employer-specific information, expected the following year.
Mercer’s partnership with Moneyhub could allow it to integrate broader financial information, opening up significant opportunities for this technology, but there is no proposed launch date yet. Coates said there are compliance and legal considerations, necessitating a degree of caution. “It’s not necessarily because we can’t do it, it’s because we are trying to get these things right.”
Higham highlighted that this AI prototype is a ‘pension specialist’, unlike general large language modelling AI tools like ChatGPT. “There are some great chatbots out there but this one is solely focused on pensions. It leverages verified information about Mercer master trust and the broader pensions landscape.”
Mercer is working with Engage Smarter, which has been developing AI tools for various industry sectors. Senior leader Matt Gosden said focus is critical. “The potential for AI is huge, but it can be used in many different ways. Saying you are going to improve your business by using AI is about as meaningful as saying you will improve it by using computers. With AI you need to narrow down what you are doing, to have a chance of delivering something that works well.”
Human versus Machine
Gosden said the tool aims to deliver financial guidance more effectively than a human. There are several aspects to this: is the information accurate, and easy to understand, and will consumers interact with these services.
The success of these tools depends on trust as much as technology, although the two are interlinked: if people fear information is mis-leading or incomplete, then trust will be hard to establish.
Gosden said they have built guard rails into this prototype to improve accuracy. The AI tool is built around a curated content base, rather than pulling information from the internet.
This ‘closed’ system is also important for data protection and financial privacy.
Coates highlighted the limitations of open AI systems like ChatGPT. “It can grab information randomly and inaccurately off the web.”
Barnett Waddingham partner and head of DC Mark Futcher asked whether this AI tool knows that it does not know things? This he said is important in ensuring accuracy and building trust.
The demo showed the AI machine admitting it did not have the relevant information in certain circumstances. It avoided straying into advice when asked more personalised questions, such as ‘should I buy an annuity’, instead offering general information on annuities and drawdown, with a prompt to seek financial advice.
“It’s an engineered system that goes into a core content base,” said Gosden. “If it doesn’t know it can hallucinate an answer which sounds convincing but is wrong.”
As AI systems rapidly learn, a curated content base also helps with transparency and compliance. “If the rules and regulations change we can update this information.” Gosden said. “If it gives a wrong answer it is easier to go back and understand where this misinformation came from, why the mistake occurred.”
These errors are an important part of the development process. “We have ‘red teams’ set up around Mercer, individuals asking it questions to find potential flaws, or whether it is giving information that is suboptimal or not right. They spend time each week trying to break the system then flagging up potential problems.”
AI responses, even at this prototype stage, are giving highly accurate answers. “Humans make lots of mistakes when answering these sorts of factual pension questions,” Gosden said. “These are already more accurate than typical human responses we have reviewed, and we are still working to improve the machine.”
But there are differences in machine and human intelligence and in the mistakes they make. “AI machines get very different things wrong. It’s the sort of thing that a human would say ‘that’s a stupid error’,” he said.
For this reason many on the panel agreed there still was a need for human input alongside AI solutions, be it ‘sense-checking’ or signing off guidance given.
LCP partner Alex Waite noted some of his firm’s clients are already using AI tools to assist pension helpdesks. “A question comes into the helpdesk and people see the AI-generated answer. In most cases they know if it is right or not, and can essentially copy and paste it if it is correct. In their hearts they probably know that this will take their job at some point, but at the moment they are still adding value.”
The issue with errors causing misleading advice is something that worried those who have worked in the financial advice industry. Mattioli Woods employee benefits team director Sean McSweeney said the example of pensions mis-selling showed how redress claims could run to nine-figure sums, if problems were widespread, something that should give the industry pause for thought.
Trust issues for trustees
Trust is also an issue for trustees and employers within the workplace pensions market.
Arc Pensions Law partner Beth Brown said that trustees will want reassurance that output from these AI tools is accurate. “Trustees need to be confident these tools are based on correct information, and that personal data is used appropriately. It’s not a case of trustees reading every sentence generated by an AI chatbots, but due diligence needs to take place, as trustees ultimately have a fiduciary duty of care to members.”
Hymans Robertson head of DC provider relations Shabna Islam raised the risk issues with AI and said that trustees board will soon need to have a policy in place to cover this, setting out key principles for risk mitigation measures.
Brown said one key issue for trustees was ensuring the distinction between guidance and advice when using these AI tools. “As we move into more personalised guidance this distinction becomes somewhat greyer,” she added.
Isio manager, DC strategy and investment James Hawkins asked whether these AI tools might be more effective if deployed by employers, rather than
the pension provider. “Most scheme members don’t see their pension provider as a direct contact point. Most go to the employer first.”
Coates said that while there could be the opportunity for larger employers to white label these tools, he felt the onus remained on the provider. “People don’t stay in their jobs for long these days, but may be with a pension provider for 15 to 20 years. Many employers have delegated an awful lot of responsibility to the master trust, and now look to product providers to give their members this sort of information and guidance.”
The demonstration gave advisers the opportunity to see what AI tools offer at present, and discuss the future potential for this tech, particularly when it comes to helping members make better decisions around retirement.
It is clear there are some concerns, particularly around accuracy, and who would be responsible for potentially misleading answers. But as Coates points out the current guidance system is not serving the majority of pension members.
“If you look at the data you see just how many people are encashing their pension on first access and taking too much too early. I am surprised more people are not up in arms at that. AI has the potential to help people think about these decisions more clearly, potentially resulting in far better retirement outcomes.”