Thomas Joy: Is the EU adopting a smarter approach to AI?

Thomas Joy, consultant at Quietroom, asks whether new EU regulation will affect the way financial companies in the UK develop, deploy and use AI tools.

The European Union is about to regulate the way companies use AI. The new legislation, the EU Artificial Intelligence (AI) Act, introduces rules and restrictions on how companies develop, deploy and use AI tools.

The Act is due to come into force later this summer. It will then be implemented over the course of three years. If your company is based in the UK and you plan to use AI in the EU, then this Act will affect you. There is not yet any equivalent legislation for companies that operate only in the UK.

Under the EU AI Act, AI tools are grouped into four levels of risk:

Unacceptable risk – for example using AI to identify people by scanning their faces, or to manipulate their behaviour. These uses will be banned.

High risk – for example assessing creditworthiness, and pricing health and life insurance. These tools must be registered in an EU database and go through extensive risk assessments and processes to make sure they’re transparent, accurate and fair.

Limited risk – for example AI chatbots. These will need to make it clear that users are talking to AI.

Minimal risk – for example free AI-enabled games, or spam filters. These uses will not be regulated.

The Act is good news. People now have protection from companies that use AI in ways that have significant implications for their lives, and companies will have a framework to help them use AI responsibly.

Among the 450-plus pages of the Act is a requirement that companies might take for granted – talking about the way they’re using AI. For many, this will be the most difficult hurdle to overcome.

The EU AI Act says that companies should tell customers when high-risk systems are being used, what they’re being used for, and what decisions are being made.

It says people have the right to an explanation of the decisions that high-risk systems make about them, and that these explanations must be clear and meaningful. It also adds that people should be notified when they interact with limited-risk AI systems, like when they talk to AI chatbots.

Companies have been using AI for years, but this legislation means that, for the first time, they’ll have to explain some of what they’re doing to their customers.

If your customers are like most people, then they won’t know much about AI. They will, however, have preconceptions. They may believe that AI is risky, based on stories they’ve heard on the news about companies getting AI wrong. And they might feel that AI can’t be trusted, perhaps based on conversations at work with AI assistants like Microsoft Copilot, which warns users that ‘Copilot uses AI. Check for mistakes’.

These negative preconceptions and low levels of understanding mean that the bar is set high for AI communications. For companies that fail to clear this bar, the consequences can be significant.

Just ask Adobe, who recently made front-page news for poorly communicating AI in their terms and conditions, or Microsoft, who are in hot water with users and security experts over a feature that takes screenshots of everything you do on your computer.

These stories raise an important question about AI and trust. If people do not trust the tech giants who make the apps and devices they use every day, then why should they trust financial service companies, who they know much less about and interact with less frequently?

With research from the FCA showing that only 36 per cent of people agree that financial firms are honest and transparent in the way they treat them, the challenge of communicating AI is likely to be greater for financial services than for other industries.

Don’t take talking about AI for granted. For some companies, communications around AI will be a burden – yet another thing to tick off the ever-growing regulatory to-do list.

But others will see it as an opportunity. They’ll treat this new requirement to talk about AI as a way to build trust and stronger relationships with customers.

In a world where AI remains unproven, the words you choose matter. And they might decide whether customers do business with you or not.
