
The opportunities afforded by AI are many, but explainability may well be the key to uptake

Article by Objectway from the WealthTech 2024 Annual Report

by Objectway | 02/02/2024 10:30:00

Roger Portnoy, Chief Strategy Officer at Objectway, runs through use cases of AI and explains why explainability is key to its uptake

Banks, wealth managers, and asset managers alike are rapidly recognising that AI is likely to become one of the most strategically important technologies for the wealth management industry to leverage. Its potential to improve profitability and the client experience is irresistible. However, many players still find it difficult to operationalise AI effectively, and monetising its benefits therefore remains a challenge.

The general willingness to invest in AI tools is high. We think that around 80% of banks have already invested in AI and plan to increase their spending on it over the next two years.

One key reason for this willingness to invest is the opportunity to drive operational efficiency, improved client outcomes, and cost savings through the effective use of AI. We think that around 70% of data and analytics decision makers are already using AI to improve the efficiency of IT and business operations.

Middle and back office
The obvious areas in which to achieve such efficiencies are the middle and back office. Robotic process automation and intelligent process design in back-office systems have already delivered significant efficiency gains. Rapid improvements in the scope and accuracy of computer vision (advanced OCR), together with NLP techniques that can properly label, classify, and assimilate complex alphanumeric data sets within unstructured document types into business processes, are now opening up further opportunities for hyper-automation. This sits well with the need for change: middle- and back-office staff often struggle to maintain data integrity and consistency within key business processes, and are frequently required to escalate exception-handling issues.
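
To make this concrete, the sketch below shows, in very simplified form, how an OCR and NLP pipeline of this kind might be wired together. The document type, field patterns, and helper names are illustrative assumptions rather than Objectway's implementation; it assumes the open-source pytesseract OCR library is available.

```python
import re
from PIL import Image
import pytesseract  # open-source OCR, standing in for "advanced OCR" here

# Illustrative field patterns -- real suitability documents would need far
# richer extraction models than these regexes.
FIELD_PATTERNS = {
    "client_id": re.compile(r"Client\s+ID[:\s]+(\w+)"),
    "risk_profile": re.compile(r"Risk\s+Profile[:\s]+(\w+)", re.IGNORECASE),
}

def extract_fields(image_path: str) -> dict:
    """OCR a scanned document, then pull structured fields from the raw text."""
    text = pytesseract.image_to_string(Image.open(image_path))
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        if match:
            fields[name] = match.group(1)
    return fields

# A downstream process can then route complete records automatically and
# escalate only the genuinely ambiguous ones to a human operator.
record = extract_fields("scanned_suitability_form.png")
if "risk_profile" not in record:
    print("Exception: risk profile missing, escalate for manual review")
```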

Specifically, the suitability process, and the support of accurate and compliant product governance, are increasingly important to the way clients receive and maintain personalised portfolios from the front office and thereby secure access to the best investment recommendations from their relationship managers. In these situations, the ideal scenario for middle- and back-office operations is to ingest, reconcile, and process additional information. However, since this information is often embedded in other documents, a consistent and scalable processing capability has not been available. By deploying computer vision and NLP to read and extract the available data, the opportunity arises to greatly improve the alignment of suitability with product governance, driving both compliance and business gains.

This same combination of technologies, it turns out, can also be trained and deployed to facilitate much greater interoperability between systems, particularly when a business process requires both data translation and data normalisation to a specific target protocol, as sketched below. This means extending data management capabilities into more and more unstructured data scenarios without forcing different actors to use a specific and often limiting data template.
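
As a sketch of the data translation and normalisation idea, the snippet below maps fields extracted from heterogeneous source documents onto one target schema. The schema, field aliases, and record shape are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical target protocol: every upstream document, whatever its
# layout, is normalised into this one record shape.
@dataclass
class SuitabilityRecord:
    client_id: str
    risk_profile: str
    investment_horizon_years: int

# Different senders label the same concept differently; the alias map
# translates their vocabulary into the target schema's field names.
ALIASES = {
    "client_id": {"client_id", "clientref", "customer_no"},
    "risk_profile": {"risk_profile", "risk_category", "attitude_to_risk"},
    "investment_horizon_years": {"horizon", "investment_horizon_years"},
}

def normalise(raw: dict) -> SuitabilityRecord:
    """Translate a raw extracted-field dict into the target record."""
    lowered = {k.lower(): v for k, v in raw.items()}
    out = {}
    for target, names in ALIASES.items():
        for name in names:
            if name in lowered:
                out[target] = lowered[name]
                break
    return SuitabilityRecord(
        client_id=str(out["client_id"]),
        risk_profile=str(out["risk_profile"]).capitalize(),
        investment_horizon_years=int(out["investment_horizon_years"]),
    )

# Two differently labelled sources normalise to the same record shape.
print(normalise({"Customer_No": "C123", "Attitude_To_Risk": "balanced", "Horizon": "7"}))
```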

“As interest in AI has exploded, developing robust and auditable methods to address explainability has emerged as a concern. Wealth managers need to be able to explain and document how an AI algorithm works for both regulators and clients alike.” Roger Portnoy, Chief Strategy Officer, Objectway

Enhancing the adviser: the use of Generative AI
To this end, AI can also be used to harness unstructured data in a front-office data management scenario, supporting the adviser in maintaining the personal relationship with the help of digital technologies. This will become more important as younger, digitally savvy investors inherit intergenerational wealth or create it themselves.

Indeed, very few wealth managers have had the discipline in their overall lifecycle management process to turn disparate bits of data from an informational repository into a reusable knowledge asset. As a result, while the landscape is littered with many snippets of insight and value, cohesion is lacking when trying to build a holistic picture that can provide both perspective and predictive capability.

However, digital technologies, above all Generative AI with its ability to make sense of disparate data sets, can strengthen the role of the adviser and offer support in decision-making processes, in innovating the offering, and in personalising the relationship.
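
One common pattern for making sense of disparate data sets in this way is retrieval-augmented generation: the snippets most relevant to an adviser's question are retrieved from the firm's own records and supplied to a generative model as context. The sketch below covers the retrieval step only; the embed() function is a toy stand-in for a real text-embedding model, an assumption rather than a specific product.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model (an assumption here):
    hashes words into a fixed-size vector so the sketch runs end to end."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Disparate snippets an adviser might hold about one client.
notes = [
    "2021 call: client mentioned a preference for sustainable funds.",
    "2023 review: client is saving towards a house purchase in five years.",
    "Client complained about reporting frequency in Q2.",
]
note_vectors = np.stack([embed(n) for n in notes])

def retrieve(question: str, k: int = 2) -> list:
    """Return the k notes most similar to the question (cosine similarity)."""
    scores = note_vectors @ embed(question)
    return [notes[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved context would then be placed into the generative model's
# prompt so its answer is grounded in the firm's own records.
print(retrieve("What are this client's investment preferences?"))
```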

Better portfolio management through the use of AI
AI also affords an opportunity to maximise portfolio returns and, therefore, increase customer satisfaction. Indeed, many AI models are currently being researched to help construct better portfolios, particularly around factor and thematic investing, and new quantitative models for alpha generation also play a role. The aim is to outperform the market, in part by processing unstructured data that traditional models cannot use.
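
As a flavour of what such quantitative models do, the sketch below computes classical mean-variance portfolio weights with numpy. It is a textbook construction under invented inputs, not one of the AI models the article refers to; in the AI setting, the expected-return estimates would typically come from a learned model rather than being fixed by hand.

```python
import numpy as np

# Invented inputs for three assets: expected excess returns and a return
# covariance matrix. In an AI-driven workflow these estimates would be
# produced by a model (e.g. from factor or text signals), not hard-coded.
mu = np.array([0.05, 0.07, 0.03])           # expected excess returns
cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.010],
                [0.002, 0.010, 0.020]])      # covariance of returns

# Classical mean-variance solution: weights proportional to inv(cov) @ mu,
# rescaled to sum to one (ignoring constraints such as no short selling).
raw = np.linalg.solve(cov, mu)
weights = raw / raw.sum()

print("portfolio weights:", np.round(weights, 3))
print("expected return:  ", round(float(weights @ mu), 4))
print("portfolio vol:    ", round(float(np.sqrt(weights @ cov @ weights)), 4))
```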

AI explainability is key
However, as interest and investigation into the many new use cases for AI have exploded, the underlying need for explainability has emerged as a concern. Wealth managers need to be able to explain and document how an investment decision or recommendation was reached, in order to maintain compliance with both regulations and client expectations.

Indeed, although AI is becoming increasingly popular, many companies are still grappling with challenges related to data privacy, embedded bias, and the transparency of AI models. Wealth managers also view the risk of AI tools providing inaccurate or tainted investment advice to their customers as too great. In response, they need to invest in explainable AI systems that can continually demonstrate that the recommendations provided by AI are always in the best interest of the client.

Explainability is not ultimately about being right or wrong; it is not about determining the outcome, but about the rationale for reaching it. This is why complexity is the enemy of explainability, and why deep-learning models are far more challenging to diagnose than models based purely on measurable statistical calculations.
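
One widely used, model-agnostic way of surfacing such a rationale is permutation importance: shuffle one input at a time and measure how much the model's predictive skill degrades. The sketch below applies scikit-learn's implementation to a toy model on synthetic data; it is one illustrative technique, not the specific approach the article has in mind.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic data: a "recommended equity allocation" driven mostly by risk
# tolerance and horizon; the third feature is pure noise.
n = 500
X = rng.normal(size=(n, 3))          # [risk_tolerance, horizon, noise]
y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.normal(size=n)

model = RandomForestRegressor(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much accuracy drops: the
# bigger the drop, the more the model leaned on that input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["risk_tolerance", "horizon", "noise"],
                       result.importances_mean):
    print(f"{name:15s} importance: {score:.3f}")
```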

There are two ways to look at this. Firstly, explainability is linked to process: being able to unpack and audit the process and how it was designed to deliver the desired outcomes.

Secondly, any framework must also include rollback procedures, so that human supervisors can clearly separate and examine different decision points. They should also be able to see the impact of a change on any of the decision points, and spot any data-led biases emerging at any stage of the overall process.
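
As an illustration of what such an auditable, rollback-friendly framework might record, the sketch below keeps an append-only trail of decision points, each with its inputs and output, so a supervisor can replay and inspect the chain step by step. The structure and names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionPoint:
    """One auditable step in an AI-assisted process."""
    name: str
    inputs: dict
    output: object
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class AuditTrail:
    """Append-only log of decision points supporting step-by-step review."""
    def __init__(self):
        self._points = []

    def record(self, name: str, inputs: dict, output) -> None:
        # Copy the inputs so later mutation cannot rewrite history.
        self._points.append(DecisionPoint(name, dict(inputs), output))

    def replay(self):
        """Yield decision points in order, so a supervisor can inspect
        each one or roll the process back to any earlier point."""
        yield from self._points

trail = AuditTrail()
trail.record("risk_scoring", {"questionnaire": "Q-77"}, "balanced")
trail.record("product_filter", {"risk": "balanced"}, ["fund_a", "fund_b"])
for point in trail.replay():
    print(point.recorded_at.isoformat(), point.name, "->", point.output)
```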

The rapid pace of development suggests that software solutions designed for articulating, setting up, and managing such explainability frameworks will, in the near future, either be integrated into AI operating systems or become integral to enterprise risk management.

Interested in reading the full report? You can read this edition of the WealthTech 2024 Annual Report online here.