Financial services companies have traditionally been slow to adopt new technology. But right now, the industry is on the cusp of a major transformation. Artificial intelligence (AI) in global banking is anticipated to grow from US$3.9 billion in 2020 to US$64 billion by 2030. The use of generative AI for financial services is picking up pace and increasingly driving innovations in operations, customer service and decision-making.
According to KPMG, 36% of financial services leaders now use generative AI at least once a day at work, and 60% use it at least weekly. Yet there is still plenty of potential for growth – areas like financial planning, customer data analytics and fraud detection could benefit the most in the coming years.
In this article, we address the five most pressing questions that leaders and decision-makers need to think about when adopting generative AI – covering the ethical, practical and strategic aspects.
What are the main ethical implications to consider when implementing a generative AI solution?
Generative AI in financial services offers great potential, but, as Uncle Ben in Spider-Man said, “with great power comes great responsibility.” To benefit responsibly, we need to prioritise ethics from day one. This means getting everyone on board to ensure models align with human values and regulations. The key areas to focus on include:
- Data privacy and security: There is a lot of sensitive data in financial services, so strong protection measures are a must. One approach is using “ethics sheets” to identify potential risks before collecting data. You should also be transparent with communities whose data you are using, ensuring you have their consent.
- Bias and fairness: If the data feeding into your AI is flawed, so are the outputs. Biased AI can affect financial decisions and customer interactions. To avoid this, using clean, diverse data and regularly testing your models for bias is essential.
- Transparency and accountability: In heavily regulated industries, AI-driven decisions need to be explainable. A black-box model will not cut it if regulators or customers ask questions. Make sure your AI-based decisions can be traced and explained in simple terms.
- Environmental impact: Training large language models consumes a lot of energy. As more financial services adopt generative AI, sustainability matters. You can reduce your carbon footprint by using energy-efficient models and opting for cloud services powered by renewable energy.
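The bias testing mentioned above can be made concrete with a simple check. The sketch below compares approval rates across customer groups (a demographic parity check); the function names, groups and data are illustrative, not a production fairness audit:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def demographic_parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy loan decisions: group "A" is approved 3/4 of the time, group "B" only 1/4.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
gap = demographic_parity_gap(decisions)  # 0.75 - 0.25 = 0.5
```

A large gap does not prove discrimination on its own, but computing it routinely – for example, on every retrained model before release – is one way to operationalise “regularly testing your models for bias”.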
How do I ensure my provider has developed their solutions responsibly?
We have said it before, but it is essential to make sure your provider is using responsible AI – fair, safe and respectful of privacy. Responsible AI is what builds trust with customers and stakeholders, and you want to ensure that it benefits everyone without leading to unintended issues. Being transparent about how the AI is developed and used is key to raising confidence and avoiding future problems.

One way to ensure AI is responsibly developed is to confirm that your provider follows industry governance standards. Financial services are heavily regulated, so any AI solution you adopt needs to comply with the rules. For example, the FCA in the UK offers guidance on AI use in finance, and a 2023 policy paper encourages a flexible framework for regulating AI. The EU AI Act 2024 sets clear regulations for AI development in Europe, and the US is increasingly introducing state-level regulations. It is also worth checking that your provider complies with data management policies, ensuring it legally and ethically sources its data.
Ask for transparency reports from your provider. These should explain how the AI model was trained, tested, and validated and whether they have taken steps to reduce bias. If they cannot provide this information, that is a red flag.
Lastly, continuous monitoring is crucial. Regular assessments are needed to make sure models are performing as expected and not introducing new biases or inaccuracies. It is all about ensuring AI remains ethical and effective over time.
What impact will generative AI have on the financial services workforce?
The reality is that generative AI will change the way we work, but it is not all about replacing jobs. In fact, the real challenge is how companies and their employees adapt to it.
According to a study by IBM, 65% of executives believe that success with AI depends more on adoption than on the tech itself. And that makes sense. Generative AI can automate repetitive tasks such as generating reports or processing data, freeing up employees to focus on more strategic, meaningful work. While some roles may be replaced, new positions may emerge in AI management, oversight and ethics – especially as regulations around AI increase.
Of course, there will be some job displacement. Tasks that once took a full team to manage can now be done by a single AI system, making some roles obsolete. But AI’s integration is key to enhancing competitiveness. In the IBM survey, 59% of CEOs said that maintaining a competitive advantage will rely on how advanced their generative AI solutions are.
As AI takes over repetitive tasks, employees will need new skills to stay relevant. 35% of CEOs say their workforce will need retraining over the next three years, a sharp increase from 6% in 2021. Upskilling is becoming more urgent, notably as 53% of CEOs are already saying they are having trouble filling key tech roles.
It is not only about the tools – workplace culture needs a shift too. Employees need to understand how AI fits into the company’s broader strategy and what it means for them personally. Currently, it is suggested that approximately 40% of employees feel disconnected from how their company’s strategic decisions impact them, highlighting a gap that needs closing.
How do you improve the accuracy and truthfulness of Large Language Models (LLMs)?
As generative AI becomes more common in financial services, trust is still a major concern. No one wants to rely on an AI model that gets things wrong or spreads misinformation, especially when it comes to sensitive financial decisions. So, how do we make these models more accurate and trustworthy?
High-quality data: LLMs are trained on massive amounts of data, but not all of it is relevant or reliable. The phrase “garbage in, garbage out” applies here. If the data used to train the model is inaccurate or biased, its output will be too. Feeding LLMs diverse, high-quality data and incorporating domain-specific knowledge like financial regulations can improve their reliability.
Keep humans involved: No matter how advanced AI gets, human oversight is essential. People can spot errors, inconsistencies or unusual patterns that the AI might miss. This helps improve accuracy and ensures the model stays aligned with ethical guidelines and regulatory standards.
Post-training updates: After model deployment, do not just forget about it. The model still needs continuous improvement based on real-world feedback. Financial markets and regulations change frequently, so keeping the model updated with fresh data ensures it remains accurate and relevant over time.
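The “keep humans involved” point above is often implemented as a confidence-based review queue: the model answers routine queries automatically, and anything it is unsure about goes to a person. A minimal sketch, where the 0.8 threshold and the function name are illustrative assumptions rather than a standard API:

```python
def route_output(answer: str, confidence: float, threshold: float = 0.8):
    """Route a model answer: send automatically if confident, else queue for human review."""
    if confidence >= threshold:
        return ("auto", answer)
    return ("human_review", answer)

status, text = route_output("Projected growth: 4.5%", confidence=0.42)
# A confidence of 0.42 is below the 0.8 threshold, so status is "human_review".
```

The same hook is a natural place for continuous improvement: logging every routed answer, and the reviewer’s verdict, produces exactly the real-world feedback that post-training updates need.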
What are the most promising applications of generative AI for financial services?
Fraud detection: Generative AI can sift through huge volumes of real-time transaction data, identifying suspicious patterns that would be hard for a human to catch. For example, platforms like Featurespace use AI to prevent fraud and financial crime in real time. This not only helps prevent fraud but also streamlines compliance processes. Solutions like Aveni Detect also help by automating workflows across businesses to ensure smoother compliance checks and assessments.

Customer service and virtual assistants: AI-driven chatbots are becoming more sophisticated, offering support to clients, though they are currently limited to common queries and standard tasks. Going forward, the key will be personalising conversational interactions, as well as initiating actions with customers, to improve the overall banking experience.
Personalised financial guidance: Whether it is helping customers manage their investments or suggesting budgeting strategies, AI can tailor advice to individual needs, from data collection and analysis to risk assessment and personalising recommendations. It is transforming investment management and financial planning with insightful analytical and strategic tools. But while it may improve decision making, generative AI cannot fully replace the expertise of a professional adviser – yet.
Automated admin: AI tools like Aveni Assist are reducing the time spent on routine tasks like report generation, which not only improves efficiency but also reduces the chance of human error. This allows employees to focus on higher-level work rather than getting bogged down in paperwork.
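To illustrate the pattern-spotting idea behind AI fraud detection, here is a toy anomaly check that flags transactions far outside an account’s usual range. Real systems such as Featurespace use far richer features and models; the z-score rule, function name and data below are purely illustrative:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=2.0):
    """Return amounts whose z-score against the account's history exceeds the threshold.

    A single z-score check is only a teaching sketch: production fraud systems
    combine many behavioural features and use robust statistics or learned models.
    """
    if len(amounts) < 2:
        return []  # not enough history to estimate a baseline
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical, nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

history = [20, 25, 22, 30, 18, 24, 5000]  # one transaction far outside the usual range
flagged = flag_anomalies(history)  # [5000]
```

Note that a large outlier inflates the standard deviation it is measured against, which is one reason real systems prefer robust statistics or learned behavioural models over a plain z-score.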