The Treasury Select Committee has criticised regulators for their lax approach to artificial intelligence, potentially exposing the public and the financial system to ‘serious harm’.
These were the damning conclusions of the MPs' report into the issue, which has been welcomed by many in the industry.
The TSC says the Financial Conduct Authority, the Bank of England, and the Treasury are not doing enough to manage the risks posed by the increased use of AI across the financial services sector.
It says this “wait-and-see” approach is insufficient, and regulators need to work more closely with the industry to ensure safe adoption of AI. It also called for the FCA to publish practical guidance on AI for firms by the end of this year.
It said: “This should include how consumer protection rules apply to their use of AI as well as setting out a clearer explanation of who in those organisations should be accountable for harm caused through AI.”
The TSC also raised the potential dangers of consumers receiving misleading advice and financial information from AI platforms, such as ChatGPT.
The MPs compiled evidence from across the financial services industry and found that more than 75 per cent of UK firms are now using AI, with the largest take-up among insurers and international banks.
Across the sector it is being utilised in a variety of ways, including to automate administration functions and deliver core services, such as processing insurance claims and credit assessments.
The report does acknowledge that AI could bring considerable benefits to consumers if used correctly.
TSC chair Dame Meg Hillier says: “Firms are understandably eager to try and gain an edge by embracing new technology, and that’s particularly true in our financial services sector which must compete on the global stage.
“The use of AI in the City has quickly become widespread and it is the responsibility of the Bank of England, the FCA and the Government to ensure the safety mechanisms within the system keep pace.
“Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk.”
Hillier pointed out that the critical third parties regime was established to give the FCA and the Bank of England new powers of investigation and enforcement over non-financial firms that provide critical services to the UK financial services sector, including AI and cloud providers.
The Government is responsible for deciding which firms are brought into this regime.
The report notes that, despite the regime having been in place for more than a year, no organisations have yet been designated under it.
The Committee urges the Government to designate AI and cloud providers deemed critical to the financial services sector in order to improve oversight and resilience.
Commenting on this report, TISA’s policy executive Phil Turnpenny says: “TISA welcomes the Treasury Committee’s recognition of the risks to consumers posed by AI, including the potential for unregulated financial advice from AI search engines such as ChatGPT to mislead and misinform consumers – a key concern highlighted in our own submission.
“AI can materially improve outcomes for consumers, but only if trust and safeguards keep pace. We support the Committee’s call for clear FCA guidance on how consumer protection and senior manager accountability apply to AI, alongside AI-specific stress testing and swift delivery of the Critical Third Parties framework.”


