IBM Watson’s David Robson: How artificial intelligence will transform financial services

Delivering financial advice, enabling greater personalisation of insurance underwriting, absorbing mountains of expert research – the march of artificial intelligence (AI) seems unstoppable. David Robson, IBM's director of Watson financial services Europe, tells Corporate Adviser that AI can bring better services and more competitive products to a wider range of people

Predictions of the robotification of our workforce are growing stronger by the day. While some commentary may sound more like science fiction than fact, PwC last week published a report predicting that up to 30 per cent of existing UK jobs are susceptible to automation from robotics and AI by the early 2030s, even if it did sweeten the pill by saying new jobs will be created elsewhere.

This trend became a reality for 34 staff at Japanese life insurer Fukoku Mutual Life earlier this year, who were replaced by a Watson-based AI solution for claims payment. With an increasing number of financial services organisations, including the likes of Swiss Re and RBS, talking to IBM about integrating AI into their business processes, Robson sees a range of commercial possibilities across financial services, from interacting with customers to crunching more data to deliver better underwriting.

Robson says: “Watson can go as far as we want it to. It is an intelligent agent. It can read documents, it can understand them and it can reason based on what it is reading. It can do most things provided the process it has to follow is written down, and someone can invest the time in making sure Watson understands it. It is a cognitive computing platform.”

Robson points to the Watson Conversation service, which delivers chatbots for online services, and its Retrieve And Rank functionality, which can scour large amounts of documentation for relevant information, as two key components that could combine to create a virtual adviser.

“Conversation and Retrieve And Rank are the heart of a virtual adviser solution. They could be used in the fact find, which would be a scripted conversation, where Watson would be able to understand what people are thinking.”
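
To make the retrieve-and-rank idea concrete, here is a toy sketch in Python: score a small library of documents against a query and return the best matches. The documents, scoring rule and function names are invented for illustration; the real Watson service uses trained rankers rather than simple term overlap.

```python
# Toy illustration of the retrieve-and-rank idea: score documents against a
# query by term overlap and return the highest-ranked passages.
from collections import Counter

DOCUMENTS = {
    "pension_faq": "A workplace pension lets you save for retirement with employer contributions.",
    "isa_guide": "An ISA shelters savings and investments from income and capital gains tax.",
    "risk_basics": "Attitude to risk describes how comfortable a client is with investment losses.",
}

def tokenise(text):
    return [word.strip(".,?").lower() for word in text.split()]

def retrieve_and_rank(query, documents, top_n=2):
    query_terms = Counter(tokenise(query))
    scores = {}
    for name, text in documents.items():
        doc_terms = Counter(tokenise(text))
        # Score = number of query terms that also appear in the document.
        scores[name] = sum(min(count, doc_terms[term]) for term, count in query_terms.items())
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_n]

if __name__ == "__main__":
    print(retrieve_and_rank("How does my attitude to risk affect investment losses?", DOCUMENTS))
```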

While advisers will always argue machines will never be able to replicate their ability to scrutinise the non-verbal messages clients send them across the course of an interview, IBM claims Watson can make attitude to risk questioning more accurate by bringing in other data that shows how customers feel about experiencing financial losses. Its Tone Analyzer service picks up on the chatbot user’s emotions, based on the words they use.

“We can change a conversation based on the responses, and we can tell whether messages are hitting home or not. Personality Insights gives you an insight into people’s personality, through the language that they use. If you can get access to somebody’s Twitter feed you can understand their attitude to risk better. You would still have to do a formal attitude to risk questionnaire, but this would be another input that would give you greater accuracy,” says Robson.
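
As a rough sketch of what “changing a conversation based on the responses” could look like, the snippet below classifies the tone of a reply from a small keyword list and adapts the next fact-find question. The word lists, labels and questions are made up for illustration; the actual Tone Analyzer and Personality Insights services rely on trained linguistic models, not keyword lookups.

```python
# Toy sketch of tone-aware branching in a scripted fact find.
ANXIOUS_WORDS = {"worried", "nervous", "scared", "afraid", "uncertain"}
CONFIDENT_WORDS = {"comfortable", "happy", "confident", "fine", "relaxed"}

def classify_tone(reply):
    words = set(reply.lower().split())
    if words & ANXIOUS_WORDS:
        return "anxious"
    if words & CONFIDENT_WORDS:
        return "confident"
    return "neutral"

def next_question(reply):
    # Steer the scripted conversation according to the detected tone.
    tone = classify_tone(reply)
    if tone == "anxious":
        return "Would you prefer options that protect your capital, even if returns are lower?"
    if tone == "confident":
        return "Would you accept short-term falls in value in exchange for higher growth?"
    return "How would you feel if your investments fell by 10 per cent in a year?"

print(next_question("I am quite worried about losing money"))
```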

Robson sees Watson playing a role in delivering advice to corners of the market that are not currently commercially viable to serve, rather than replacing advisers altogether.

“I do not think Watson will replace private bank advisers, but it can offer them the opportunity to engage with new segments of the market that they were previously unable to engage with. One of the positive advantages is the certainty it gives you. From a regulatory perspective that gives you certainty as to what was said, and what advice was given, that you can’t get with humans. You have an absolute record of any conversation with Watson,” says Robson.

While the technology is still in its infancy, more projects are in the pipeline. RBS is about to go live with a service that will use Watson to triage customers to the right place within the organisation, says Robson, who sees life insurance as another key growth area.

“We are having lots of conversations with life insurers, the most intense at the moment being with Swiss Re. For them we are reading the conditions, treatment plans, illnesses, drugs and applications, and understanding the criteria for underwriting a lot better. We are automating and reducing their existing documentation. With a computer you can extend the scope of the information used by reading documents and understanding data that isn’t always taken into account,” he says.

Robson sees Watson being able to deliver underwriting benefits to insurers from the big data held on individuals. “Yes, this will make pricing more individual-specific in the long run,” he adds.

Most of us are familiar with Apple’s Siri, Google Now, Microsoft’s Cortana or other phone-based intelligent assistants, which are complex enough to have an amusing short conversation with, but soon palm you off with ‘I found this on the internet’.

“Siri is aimed at a different part of the market, consumers not business. It has a small set of interactions. It can do between 10 and 12 things but once you go beyond that it will take you to Google,” says Robson.

So how is AI different? Historically, computer programs have given precise answers to strict mathematical questions, often with a decision-tree approach. Watson aims to replicate the human approach to problem-solving: observing visible phenomena to spot trends, creating hypotheses about what these mean, evaluating which of the hypotheses are right or wrong and then choosing the best one. Humans aren’t completely redundant – the results obviously need sense-checking and the initial parameters have to be set by people. Where Watson and other AI solutions bring more to the table is in the ability to consume massive amounts of information in non-conventional formats and shape the findings from this data into useful outputs.
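
The contrast between the two styles can be sketched in a few lines of Python. The first function is classic decision-tree logic; the second generates competing hypotheses about a claim, scores each against the evidence and picks the most likely. All of the categories, cues and weights here are invented for illustration and are not Watson’s own rules.

```python
# Toy contrast between rule-based and hypothesis-based reasoning.

def rule_based_triage(claim):
    # Decision-tree logic: precise answers to precise, structured questions.
    if claim["amount"] > 10000:
        return "refer to senior assessor"
    if claim["documents_complete"]:
        return "approve"
    return "request more information"

def hypothesis_based_triage(claim_evidence):
    # Generate hypotheses, score each against the evidence, choose the best one.
    hypotheses = {"routine claim": 0.0, "needs specialist review": 0.0, "possible fraud": 0.0}
    for item in claim_evidence:
        if "inconsistent" in item:
            hypotheses["possible fraud"] += 0.6
        if "pre-existing condition" in item:
            hypotheses["needs specialist review"] += 0.5
        if "standard treatment" in item:
            hypotheses["routine claim"] += 0.4
    return max(hypotheses, key=hypotheses.get)

print(rule_based_triage({"amount": 2500, "documents_complete": True}))
print(hypothesis_based_triage(["standard treatment", "inconsistent dates on receipts"]))
```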

Conventional computing can only handle neatly organised structured data, usually held in a database. Today, 80 per cent of data is unstructured, including literature, articles, research reports, blogs and tweets. Watson uses natural language processing, governed by rules of grammar, to interpret all this data. It doesn’t just look for keywords, but examines the grammatical and structural composition of sentences to draw meaning from the way the words are put together. It also understands context and tries to infer the intent of the author through linguistic models and algorithms. It learns the jargon of a subject area, analyses documents and calculates a percentage likelihood that its assessment of what each document means is correct. It can use this to process large volumes of data ‘intelligently’ and to create hypotheses.
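
The idea of attaching a percentage likelihood to each reading of a passage, rather than a hard keyword match, can be illustrated with a toy scorer. The intent labels and cue words below are invented for the example; Watson derives its interpretations from trained linguistic models, not word lists.

```python
# Toy sketch of confidence scoring over unstructured text: rate each candidate
# interpretation of a sentence as a percentage likelihood rather than a yes/no
# keyword hit.
def interpret(sentence):
    cues = {
        "underwriting decision": {"underwrite", "premium", "exclusion", "loading"},
        "claim notification": {"claim", "injury", "accident", "hospital"},
        "policy query": {"policy", "renewal", "cancel", "cover"},
    }
    words = {word.strip(".,").lower() for word in sentence.split()}
    raw = {intent: len(words & cue_words) for intent, cue_words in cues.items()}
    total = sum(raw.values()) or 1
    # Express each hypothesis as a percentage likelihood of being the right reading.
    return {intent: round(100 * score / total, 1) for intent, score in raw.items()}

print(interpret("The hospital sent a claim form after the accident"))
```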

Medicine is a key area for Watson and other AI developers. A team at Stanford University recently published research showing AI beating doctors at identifying skin cancers, paving the way for diagnosis of melanomas by smartphone photo. Smartphone images have historically struggled to assist in this area because differences in focus, zoom, angle and lighting have rendered them unreliable for identification or classification purposes. The Stanford team’s approach, trained on 1.4 million images, achieved 72.1 per cent accuracy, compared with scores of 65.6 per cent and 66 per cent by two qualified dermatologists.
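
The general recipe behind this kind of image work is transfer learning: reuse a network pretrained on everyday photographs and retrain only a small classification head on labelled lesion images. The sketch below assumes TensorFlow/Keras and illustrative class labels; it is a generic outline of the technique, not the Stanford team’s exact model or data pipeline.

```python
# Minimal transfer-learning sketch for image-based diagnosis (illustrative only).
import tensorflow as tf

NUM_CLASSES = 3  # e.g. benign, malignant, non-neoplastic (illustrative labels)

# Reuse a convolutional network pretrained on ImageNet as a fixed feature extractor.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3)
)
base.trainable = False

# Add a small classification head to be trained on labelled lesion photographs.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) would then be run on a labelled dataset of skin images.
```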

IBM, meanwhile, is using AI to crunch reams of medical papers and distil disparate knowledge in the field of cancer, helping to narrow the focus for further study, research and development and pointing practitioners towards the best solutions.

When it comes to fintech, IBM has a deliberate policy of low entry prices for start-ups, making Watson a viable component in the services they design. That should prove useful to those looking to maximise the pension dashboard’s functionality when it finally launches. “We want start-ups to use it as a component, which they are doing,” says Robson. “We want to make it easy to integrate.”

The march of the robots may seem unstoppable, but we can take comfort in the thought that it will be a long time before they are smarter than us. If processing power continues to improve at its current rate of doubling every 18 months, it will be more than 20 years before computers are as powerful as human brains, says IBM. But then, as anyone processing claims in an insurance company will tell you, many jobs only involve using a tiny fraction of our intellect.
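
A quick back-of-the-envelope calculation shows the gap implied by IBM’s figure, assuming the stated doubling rate holds; the numbers below are illustrative arithmetic, not a measured comparison of brains and machines.

```python
# If processing power doubles every 18 months, how much headroom does
# "more than 20 years" imply?
DOUBLING_PERIOD_YEARS = 1.5
YEARS = 20

doublings = YEARS / DOUBLING_PERIOD_YEARS   # ~13.3 doublings
growth_factor = 2 ** doublings              # roughly a 10,000-fold increase
print(f"{doublings:.1f} doublings -> about {growth_factor:,.0f}x more processing power")
```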
