
Why 72% of UK clients now demand ethical AI from their financial advisers

Summary: why values-first AI matters in financial advice

by Aveni | 27/07/2025

  • UK clients increasingly expect ethical, transparent use of AI in financial advice: 72% say they trust providers more when AI is used responsibly.

  • Generic AI tools risk unsuitable advice, regulatory non-compliance, and eroded client relationships.

  • Ethical, purpose-built AI must support treating customers fairly, explain decisions, and reflect client-specific needs.

  • Values-first AI improves client trust, transparency, and outcomes while reducing regulatory and reputational risk.

  • FinLLM by Aveni is a domain-specific LLM for UK financial services, built to power compliance-ready, client-first advice.

The hidden risks of using generic AI in financial advice

Your clients are asking harder questions. They want to understand how you make decisions, not just what you recommend. Increasingly, they are choosing advisers based on trust, transparency, and ethical standards, not just performance.

A recent survey shows that 72% of UK clients trust financial providers more when those providers demonstrate responsible AI use. Yet most AI tools were not built for the ethical and regulatory demands of financial advice.

At Aveni, we call the alternative values-first AI: technology purpose-built for regulated financial advice, where treating customers fairly, transparency, and client-centricity are built in from the start, not bolted on afterwards.

This post explains what makes values-first AI different and why it is rapidly becoming essential.

Why generic AI fails to meet client expectations

Most AI tools in financial services were not designed to meet the ethical and regulatory standards that define professional advice. They often prioritise efficiency and automation over clarity, fairness, and client understanding.

That misalignment creates a credibility gap. Advice generated by generic AI can be:

  • Difficult to explain
  • Impossible to audit
  • Vulnerable to bias
  • Unsuitable for real client needs

These gaps can lead to loss of trust and increased regulatory scrutiny.

The ethical and regulatory risks of generic AI

  • Impersonal advice: clients notice when recommendations are vague or generic, and that undermines their confidence in your expertise.

  • Regulatory exposure: the FCA expects advisers to provide clear, auditable justifications for every recommendation. Generic tools often lack this transparency.

  • Liability from poor fit: if AI fails to reflect a client’s unique context, or introduces unintended bias, you risk delivering unsuitable or unfair advice.

  • Reputational damage: the use of opaque, generic AI can erode long-term client trust, impacting referrals and retention.

What clients and regulators expect from AI in advice

To protect trust and maintain compliance, firms need AI that:

  • Understands the specific demands of regulated advice
  • Embeds treating customers fairly (TCF) principles
  • Provides explainable and auditable recommendations
  • Keeps humans in the loop: clients and regulators expect AI to enhance, not replace, professional judgment, so ethical AI supports advisers rather than making autonomous decisions and keeps human oversight central (a minimal sketch of such a review gate follows below)

We call this values-first AI: purpose-built technology that aligns with the ethics, standards, and legal responsibilities of the financial advice profession.
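
To make the human-in-the-loop principle concrete, here is a minimal sketch, in Python, of what a review gate can look like. It is an illustration under assumptions rather than a description of any real product: the DraftRecommendation structure, the route_for_review function, and the 0.8 confidence threshold are all hypothetical.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class DraftRecommendation:
        client_id: str
        summary: str
        rationale: str            # plain-language “why” for the client
        model_confidence: float   # 0.0 to 1.0, reported by the model

    def route_for_review(draft: DraftRecommendation,
                         adviser_approve: Callable[[DraftRecommendation, bool], bool]) -> bool:
        # Nothing reaches a client without a human decision; low-confidence
        # drafts are flagged so the adviser knows to scrutinise them harder.
        needs_extra_scrutiny = draft.model_confidence < 0.8
        return adviser_approve(draft, needs_extra_scrutiny)

The point of the design is structural: the model has no path to the client that bypasses the adviser.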

What ethical AI looks like in practice

Values-first AI means more than just adding compliance filters to generic tools. It is about designing AI systems with ethics and regulation at their core.

Here is what that looks like in practice:

  • Fairness by design: built to uphold treating customers fairly (TCF) principles and guard against bias.

  • Explainability built in: every recommendation includes rationale that can be clearly communicated to clients and regulators (illustrated in the sketch after this list).

  • Client-centric logic: models are trained to reflect individual client needs, preferences, and risk tolerance.

  • Regulatory awareness: systems that proactively support compliance with FCA Consumer Duty requirements.
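
As an illustration of what “explainability built in” can mean at the data level, the hypothetical sketch below (invented for this post, not FinLLM’s actual design) bundles each recommendation with its rationale, the client facts it relied on, a timestamp, and a tamper-evident checksum, so the “why” behind a decision can be replayed for a client or a regulator later.

    import hashlib
    import json
    from datetime import datetime, timezone

    def audit_record(client_id: str, recommendation: str,
                     rationale: str, inputs: dict) -> dict:
        # Capture what was recommended, why, and on which client facts.
        record = {
            "client_id": client_id,
            "recommendation": recommendation,
            "rationale": rationale,    # plain-language explanation
            "inputs": inputs,          # facts the advice relied on
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        # A hash over the canonical JSON makes later tampering detectable.
        record["checksum"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode("utf-8")
        ).hexdigest()
        return record

Whatever the exact fields, the principle is that rationale and auditability are first-class parts of the record, not afterthoughts.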

This design-led approach to ethics and regulation is what underpins Aveni Labs’ FinLLM, the UK’s first language model tailored specifically for financial services.

Why the market is ready for ethical AI

Financial advice is at a turning point. The next generation of clients is already setting new expectations for how advice should be delivered.

Aveni’s recent survey with YouGov reveals:

  • 83% of 25–34-year-olds would feel comfortable using AI-assisted financial advice if it reduced costs
  • 86% of this group say accessibility is one of the most important benefits AI can bring to the advice experience
  • In contrast, only 33% of over-55s report the same comfort with AI, highlighting the need for a blended, human-first approach for older clients
  • Despite this divide, 93% of people who currently use financial advice trust their adviser’s ability to understand their needs. This is a strong foundation that AI can help enhance
  • Among non-users, 62% feel uncertain or indifferent about financial advice, often due to concerns about hidden fees or adviser motivations

This creates a dual opportunity for advice firms.

First, to engage younger, digital-first clients with affordable, AI-assisted tools that are built for trust, transparency, and personalisation. Second, to strengthen traditional advice models for older clients by using compliant, human-in-the-loop AI that supports advisers rather than replacing them.

With £7 trillion in generational wealth expected to change hands by 2050, the firms that adopt ethical, values-first AI early will be in the strongest position to capture the opportunity and lead the market.

How ethical AI strengthens trust, compliance, and client experience

When firms use AI built specifically for financial advice, the results are immediate and meaningful for clients, advisers, and regulators alike.

  • Personalised, understandable advice: clear, tailored recommendations in plain language clients can relate to.

  • Built-in transparency: every decision includes auditable rationale for clients and regulators.

  • Stronger compliance: automated documentation supports Consumer Duty and reduces adviser workload.

  • Deeper client engagement: clients feel more confident when they understand the advice process.

  • Fair, unbiased outcomes: designed to reduce risk of discrimination and ensure consistent treatment.

  • Trust that drives loyalty: ethical AI reinforces credibility, improving retention and referrals.

By combining ethical design with regulatory awareness, values-first AI like FinLLM helps advice firms deliver better outcomes and build lasting trust in a rapidly changing market.

The trust advantage in today’s market

In a competitive market, trust is a differentiator, and it is shifting fast:

  • 72% of clients say they trust providers more when AI is used responsibly
  • Ethical AI builds credibility, loyalty, and word-of-mouth referrals
  • According to Professional Adviser, younger generations are increasingly turning to social media “finfluencers” for guidance, signalling a growing trust gap between traditional advice and digital-native expectations
  • Research from Edelman’s Trust Barometer also shows that transparency and ethical use of technology are now core to how consumers evaluate professional credibility
  • Transparent technology boosts your professional standing and client retention

The choice every advice firm must make

You can continue using generic AI tools and hope they do not create compliance or client relationship issues. But the data is clear: that path carries real risk. A 2024 study reported by Entrepreneur found that 80% of financial advice generated by ChatGPT was inaccurate or misleading, underlining the danger of relying on generic AI tools in regulated environments.

Or you can adopt purpose-built, values-first technology that reflects your professional standards.

At Aveni, we have developed FinLLM, a domain-specific large language model purpose-built for UK financial services. Trained on UK financial data and regulatory materials, it delivers enhanced compliance, safety, and accuracy. FinLLM outperforms general-purpose language models on key financial tasks, including understanding and reasoning over tables, summarising financial documents, classifying financial information, and answering finance-specific questions. That makes it a strong foundation for responsible, enterprise-grade AI agents in financial services.

Because in financial advice, how you make decisions is just as important as what you decide.

Technology built for regulated advice

At Aveni Labs, we build AI specifically for the financial advice sector, where trust, transparency, and regulation are non-negotiable.

FinLLM is fine-tuned on real financial data, aligned to UK regulatory frameworks, and trained to deliver fair, explainable, and client-centric recommendations.
