Pensions advice ‘chatbots’ and guidance tools that utilise artificial intelligence to deliver more personalised support to members look set to transform the industry — but a proper regulatory framework is needed to build trust and confidence in this nascent technology.
At a recent roundtable, delegates discussed the regulatory, legal, and ethical risks associated with integrating AI into pension advisory services, highlighting the need for clear guidelines to address issues such as biased algorithms and data privacy concerns.
Regulation
AI presents a significant challenge for regulators, as well as for trustees overseeing these schemes, as providers begin to utilise this technology.
Consultants at the event, who tested a ‘chatbot’ prototype developed by Mercer, emphasised the importance of trustees implementing robust AI policies. They called for regulatory enforcement of such policies to build trust among members and to ensure transparency and responsible AI implementation.
Shabna Islam, head of DC provider relations at Hymans Robertson, stated: “Trustee boards should have an AI policy in place, just like we do as an organisation. What are you trying to achieve? What are the boundaries? What are the risks and measures that you have?”
She said the regulator should intervene and compel boards to establish such policies before AI technology becomes widespread. This pre-emptive measure could help identify potential risks or problems early on.
“I would like the regulator to step in and enforce this. I think this will also help build trust with members. If they can see that master trust trustee boards are empowered to issue a policy that sets out how they’re going to approach AI, that will help.”
Mark Futcher, partner and head of DC and workplace wealth at Barnett Waddingham, agreed, stating that the regulator needed to “toughen up”. He noted that many people would follow automated guidance routes and reach a ‘good enough’ position. Regulation needs to be updated and strengthened to reflect these new realities, he said.
A key aspect of this is the distinction between advice and guidance. The Financial Conduct Authority is reviewing the current boundary between the two and discussing a third option, be it ‘simplified advice’ or more tailored guidance. AI has the potential to blur these distinctions further, as chatbots can deliver information and guidance tailored to an individual’s exact circumstances.
Beth Brown, partner at Arc Pensions Law, discussed the distinction between advice and guidance, noting that factual information is not advice. At present, the prototype demonstrated by Mercer only provides factual information in response to questions. However, as AI tools develop and give more tailored answers, this personalisation could create grey areas, she said. Any AI tool used by a pension scheme must therefore make clear to members that it is providing information or guidance, not advice.
Brown argued that while AI tools may be useful, they cannot replace trustees, given trustees’ fiduciary duties. Many decisions require human discretion that a machine cannot easily replicate, such as deciding what happens to a scheme member’s assets if they die without nominating a beneficiary.
“But for improving member experience, I think trustees will be open to utilising AI tools. But it’s about building trust in them. You need that buy-in for people to spend the time and resources using these tools. This will actually create the input which leads to getting a better output.”
Alex Waite, partner at LCP, highlighted particular risks with AI systems that scour the internet for answers to member questions. He warned about the dangers of an unregulated platform that functions like a search engine and said effective regulation is essential to prevent industry chaos, particularly as these tools evolve from providing information to more personalised guidance and ultimately offering advice.
There was also emphasis on developing a cyber risk policy, particularly concerning the handling of personal data.
Brown added: “We also need to develop the cyber risk policy because that’s the thing we’re not talking about. Once the personal data goes into this, then that’s a lot of personal information. I think it is something people will be concerned about.”
Post-Retirement Warnings
One of the key potential uses for AI is to help people make better decisions at retirement, particularly regarding sustainable withdrawal rates.
Delegates pointed out that many individuals currently risk depleting their future income by taking too much money too soon due to a lack of advice. However, they also noted that for some people, accessing funds early isn’t a choice but a necessity due to the cost of living crisis.
Futcher said AI tools could guide users in making more responsible financial decisions. These tools should warn users about the consequences of negative behaviours that could impact their future financial stability.
There was widespread support for this among the panel, many of whom said that while this stops short of advice, it could change consumer behaviour and lead to better outcomes.
Futcher said: “I think you can flash up warnings, which no one does. Providers don’t. Trustees don’t know. No one flashes up warnings. All it needs to say is this is not sustainable and that’s factual. You can give factual information.
“I’d rather see a change in the regulatory stance to say you’re allowed to intervene where you see harm occurring.”
James Hawkins, engagement manager at Isio, noted the vulnerability considerations around using AI tools. Low financial literacy can itself be a vulnerability, he said, particularly when individuals draw down funds too heavily and too quickly. Nearly 50 per cent of people withdraw more than 8 per cent of their pot a year, he said, meaning there is a serious risk they will outlive their pension fund.
He asked: “Is it not incumbent upon us to say, you need to be careful here? Because the whole point of a retirement fund is to have an income for life.”
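That ‘factual warning’ framing lends itself to straightforward automation. The sketch below is a minimal illustration of the idea, not any provider’s actual tool: it projects how long a pot would last at a fixed withdrawal rate and flags unsustainable rates. The function names, the 4 per cent growth assumption and the 100-year cut-off are all hypothetical.

```python
# Minimal, illustrative sustainability check of the kind delegates described.
# All names, rates and thresholds are assumptions for this sketch, not any
# provider's actual rules.

def years_until_depleted(pot: float, annual_withdrawal: float,
                         annual_growth: float = 0.04) -> int | None:
    """Project how many years a pot lasts at a fixed annual withdrawal,
    assuming constant investment growth. Returns None if the pot is
    projected to outlast a 100-year horizon."""
    years = 0
    while pot > 0:
        pot = pot * (1 + annual_growth) - annual_withdrawal
        years += 1
        if years > 100:  # treat anything beyond a century as sustainable
            return None
    return years


def withdrawal_warning(pot: float, annual_withdrawal: float) -> str:
    """Return a purely factual message about a proposed withdrawal rate."""
    rate = annual_withdrawal / pot
    horizon = years_until_depleted(pot, annual_withdrawal)
    if horizon is None:
        return f"Withdrawing {rate:.0%} a year is projected to be sustainable."
    return (f"Warning: withdrawing {rate:.0%} a year could deplete this "
            f"pot in roughly {horizon} years.")


# Example: the 8 per cent withdrawal rate cited at the roundtable.
print(withdrawal_warning(pot=100_000, annual_withdrawal=8_000))
# -> Warning: withdrawing 8% a year could deplete this pot in roughly 18 years.
```

On these assumptions, the 8 per cent rate Hawkins cited depletes a pot in under 20 years — the kind of factual output Futcher argued tools should surface.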
Delegates also said they wanted trustees to give as much attention to at-retirement and post-retirement planning as they currently do for accumulation; some said this should be a mandated part of a trustee’s duties.
Sean McSweeney, employee benefits team director at Mattioli Woods, criticised the current system where trustees can seemingly avoid responsibility when individuals retire and are given limited choices.
“I’ve been saying for 25 years – mandate trustees to look at at-retirement and post-retirement as much as they do accumulation. I cannot understand why trustees can walk away when somebody hits retirement age and give them a choice of two options and have no responsibility for it.”
Brown agreed that not much attention is given to post-retirement advice, despite pensioners still being members of the scheme.
However, McSweeney said there could be other applications for AI to assist with this issue. He asked whether there was an opportunity to use AI to shape default retirement options. He said he’d been disappointed with the slow progress with retirement pathways, despite ongoing discussions with major providers. He added: “It would be brilliant if an AI approach would give a big proportion of people, who it was appropriate for, advice. Is that going to happen? I hope so.”
Cost
AI tools could help reduce the cost of providing advice and guidance services. However, Engage Smarter founder Matt Gosden said there is likely a floor to how low these costs can go: automation cuts the human cost of delivering advice, but regulatory requirements still have to be met and paid for. This could mean that even AI-enabled advice services remain beyond the reach of some pension members.
Gosden said: “How far down the cost spectrum could it come and then who can afford that? I can’t see it going down that long tail of people who don’t have that much money. So I still think the vast majority of people are not going to be able to afford advice and therefore there is a need for this sort of guidance solution to help them in the moment, to ensure they do not make a stupid decision when it comes to their retirement income.”
Meanwhile, Waite highlighted the significant challenge posed by potential redress costs in financial regulation. He cautioned about the uncertainties in the market and the potential liabilities involved in advising clients on financial decisions. Delegates expressed concerns about the potential for AI-driven engines to give misleading information — asking who would bear the cost of redress in such cases, and urging regulators to engage with this question.
Ultimately, Gosden said the reliability and accuracy of AI systems depend on having robust underlying models. Stephen Coates, head of proposition at Mercer, said the firm’s prototype had been rigorously tested.
Gosden emphasised the potential for significant improvement from combining human oversight with AI technology. This is an area that is still developing, requiring regulators, scheme sponsors, trustees, providers and advisers to be cognisant of the potential risks, but also of the opportunities it offers to transform services and products for members.