How wealth managers can harness the strengths of GenAI and LLMs while mitigating their core risks
Despite the financial industry’s relative conservatism, artificial intelligence has been part of modern financial firms’ tech stacks for several years. The wealth management sector is no exception – Morgan Stanley has collaborated with OpenAI since the latter’s early days, with the bank’s wealth advisors leveraging generative AI (GenAI) to navigate the firm’s knowledge base and streamline their customer meetings.
Still, the vast potential of the technology remains untapped by most. According to Andrew Lo, a professor of finance at MIT Sloan and director of the MIT Laboratory for Financial Engineering, AI can provide financial advice reflecting the domain-specific knowledge that humans demonstrate by passing the CFA exam and obtaining other certifications – provided it is equipped with a supplemental module incorporating finance-specific knowledge. Moreover, given additional training, the models could adhere to the ethical and compliance standards required by regulators. Bias, however, remains an issue.
However, wealth managers are far from handing over financial decision-making to GenAI. The large language models (LLMs) powering text-based GenAI applications carry well-documented risks: they produce inaccurate outputs and lose track of discussions, and in wealth management even small mistakes can have significant financial and legal repercussions.
In this article, we explore how wealth managers can harness the strengths of GenAI while mitigating some of its core risks. We advocate synthesised applications that combine the structured logic of traditional financial models with LLMs. That way, financial firms can retain control over the outputs of their AI-powered services while still leveraging natural language processing technology.
The current challenges of LLMs in the wealth management sector
One of the primary advantages of LLMs is their ability to process users’ questions and generate responses based on unstructured data. However, this strength comes with substantial risks, particularly related to the accuracy of the generated outputs. First, LLMs “hallucinate”, producing convincing but incorrect outputs. Second, they struggle to keep discussions on track and to remember important details.
In the context of financial planning, these errors could manifest in misinterpreting financial data, forgetting the details of customers’ financial profiles or losing track of an active customer journey. These mistakes can harm customer experiences as well as the institutions’ brands. Due to these risks, wealth managers must maintain a relevant level of control over the use of LLMs.
The solution: synthesising LLMs with traditional applications
The solution lies not in abandoning these tools, but in integrating them thoughtfully within a broader ecosystem of applications. This synthesis can address the limitations of LLMs while leveraging their strengths, ultimately creating a more robust and reliable financial planning process.
Application as the interface: Leveraging LLMs for chat experiences
One way to circumvent the risks associated with LLMs is to introduce an application layer as the gateway between the user and the LLM. In this approach, LLMs serve as the engine behind chat-based interactions but are not directly exposed to the end-user. The application acts as a mediator, interpreting the client’s requests and using the LLM to generate relevant responses. This setup ensures that clients can harness the LLMs’ ability to quickly process and respond to complex queries – while minimising the risks of interacting with the model directly.
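The mediator pattern described above can be sketched in a few lines. This is a minimal illustration, not a production design: `call_llm` is a stub standing in for a real LLM API, and the topic list and prompt wording are hypothetical.

```python
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call (name and behaviour assumed).
    return f"[LLM draft response to: {prompt}]"

@dataclass
class AdvisoryApp:
    """Application layer sitting between the client and the LLM."""
    allowed_topics: tuple = ("portfolio", "savings", "retirement")

    def handle(self, client_request: str) -> str:
        # 1. The application, not the client, decides what reaches the model.
        if not any(topic in client_request.lower() for topic in self.allowed_topics):
            return "I can only help with financial planning questions."
        # 2. The request is wrapped in a controlled prompt before the LLM sees it.
        prompt = f"Answer as a regulated financial assistant: {client_request}"
        draft = call_llm(prompt)
        # 3. The application screens the draft before it reaches the end-user.
        return draft if draft else "Please rephrase your question."

app = AdvisoryApp()
print(app.handle("How is my retirement portfolio doing?"))
```

The key design point is that the client only ever talks to `AdvisoryApp.handle`; the raw model is never exposed, so every input and output passes through a layer the firm controls.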
Structured memory: Maintaining context and accuracy
Another significant challenge with LLMs is their tendency to lose track of context over extended interactions. To mitigate this, the financial planning application layer should maintain a structured memory of important information, such as the client’s financial goals, risk tolerance, and previous interactions. By retaining a structured memory, the application can also verify the LLM’s output against known data points, reducing the likelihood of errors and ensuring that the advice provided is accurate.
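A structured memory of this kind might look as follows. This is a minimal sketch under stated assumptions: the field names, the risk-tolerance vocabulary, and the string-matching check are all illustrative stand-ins for a real verification layer.

```python
class ClientMemory:
    """Structured memory kept by the application layer, outside the LLM context."""

    def __init__(self, goals, risk_tolerance):
        self.facts = {"goals": goals, "risk_tolerance": risk_tolerance}
        self.history = []  # prior interactions, retained across sessions

    def record(self, user_msg: str, llm_reply: str) -> None:
        self.history.append((user_msg, llm_reply))

    def verify(self, llm_reply: str) -> bool:
        # Reject drafts that contradict known data points, e.g. a reply that
        # assumes a different risk tolerance than the one on file.
        for level in ("conservative", "balanced", "aggressive"):
            if level in llm_reply.lower() and level != self.facts["risk_tolerance"]:
                return False
        return True

memory = ClientMemory(goals=["retire at 65"], risk_tolerance="balanced")
memory.record("What's my plan?", "Given your balanced risk tolerance, stay the course.")
print(memory.verify("As an aggressive investor, you could add crypto."))  # False
```

Because the facts live in the application rather than in the model’s context window, they cannot be “forgotten” over a long conversation, and every draft can be checked against them before it is shown to the client.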
Intent recognition and information retrieval
The application should interact with the LLM to decode the user’s intent, leveraging the model’s natural language processing capabilities to interpret complex queries. Once the application understands the user’s intent, it can take advantage of retrieval-augmented generation (RAG) techniques to search for relevant information across various sources, including PDFs, websites, and static or dynamic data repositories. This ensures that the LLM is not generating responses in a vacuum but is informed by up-to-date and contextually relevant information.
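The intent-then-retrieve flow can be illustrated with a toy example. The intent labels, the document store, and the keyword-based classifier below are assumptions made to keep the sketch self-contained; in practice the intent step would itself call the LLM and the store would be a vector index over PDFs and other sources.

```python
# Toy document store standing in for indexed PDFs, websites and data feeds.
DOCS = {
    "fees": "Our advisory fee is 0.5% of assets under management per year.",
    "pension": "Pension withdrawals are possible from age 55 under current rules.",
}

def recognise_intent(query: str) -> str:
    # Keyword stub; in production this classification would use the LLM.
    q = query.lower()
    if "fee" in q or "cost" in q:
        return "fees"
    if "pension" in q or "retire" in q:
        return "pension"
    return "unknown"

def retrieve(intent: str) -> str:
    return DOCS.get(intent, "")

def answer(query: str) -> str:
    context = retrieve(recognise_intent(query))
    if not context:
        return "I could not find relevant information."
    # The retrieved passage grounds the prompt, so the model is not
    # generating in a vacuum.
    prompt = f"Using only this source: '{context}', answer: {query}"
    return prompt  # in production: the grounded prompt would be sent to the LLM

print(answer("What do you charge in fees?"))
```

The point of the pattern is the ordering: the application first resolves what the user wants, then fetches authoritative material, and only then lets the LLM generate, constrained by that material.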
Example – Kate by Kidbrooke®
We created Kate by Kidbrooke®, a combination of our analytical platform, KidbrookeONE, and an LLM, to demonstrate how the introduction of an application layer and the integration of traditional structured financial models can effectively mitigate the downsides of LLM use in wealth management.
Kate by Kidbrooke® operates by using an application layer as a mediator between the user and the LLM, ensuring that the presented outputs are accurate, timely, and relevant. The platform maintains a structured memory of client data and integrates live external data to provide up-to-date financial advice. By employing an orchestration layer, Kate abstracts complex financial concepts and validates LLM outputs for accuracy and compliance before they reach the client.
This not only enhances the reliability of the generated financial guidance but also demonstrates how financial planning tools can effectively leverage AI while maintaining the necessary controls to prevent errors and ensure client trust.
Balancing Innovation and Responsibility
While generative AI promises to enhance efficiency, personalise client interactions, and process vast amounts of unstructured data, its adoption must be approached with caution. The potential risks—inaccuracies, loss of context, and compliance issues—highlight the need for a thoughtful and controlled implementation strategy.
By synthesising LLMs with traditional financial models and robust application layers, wealth managers can harness the power of AI while maintaining accuracy and reliability that their clients expect. Kidbrooke’s Kate exemplifies this approach, demonstrating how structured data management, live data integration, and an orchestration layer can mitigate the downsides of LLM use in financial planning.
As attitudes towards generative AI move beyond disillusionment towards productivity, those who can effectively integrate these technologies while safeguarding their clients’ interests will lead the way in the next phase of the continuous evolution of financial services. The future of wealth management lies in this careful balance between innovation and responsibility, ensuring that AI serves as a tool to enhance, rather than replace, trusted relationships at the core of financial planning.