Blog from London Stock Exchange Group

Why the global financial system needs high-quality data it can trust

By David Schwimmer, CEO, LSEG

11/03/2025

  • In artificial intelligence (AI), the value of data is not just in its volume but in its integrity and trustworthiness – poor data leads to unreliable results and AI risks such as hallucinations and bias.
  • Data transparency, security, and integrity—such as “watermarking” for financial data—are critical for compliance, customer confidence, and effective AI deployment.
  • Industry-wide coordination, standardised definitions of “data trust”, and interoperable regulations are essential to fostering reliable AI systems and scaling global financial innovation.

More than a century ago, reels of ticker tape were the cutting edge of real-time data technology. Today, digital data is the lifeblood of the global financial system. Without pinpoint accuracy and trust in that data, however, we risk detrimental consequences for the whole economy.

As a global data and analytics provider, LSEG (London Stock Exchange Group) delivers around 300 billion data messages to customers across 190 markets daily, including 7.3 million price updates per second.

We are also seeing how AI is transforming finance. It is supercharging productivity internally and in our customers’ products, enhancing financial workflows by boosting efficiency, enabling more informed decisions, and strengthening the customer experience.

As the financial services sector continues to explore the possibilities of AI, there is an enormous appetite for data. This continues to grow: customer demand for our data has risen by around 40% per year since 2019.

But without the right data, even the best algorithms can deliver mediocre or, worse, misinformed results. Poor-quality data increases the risk of AI hallucinations, model drift and unintended bias. The growing complexity of contracts and rights management in this field creates inherent challenges in avoiding licensing or contractual breaches.

Building on data integrity and digital rights
There are great new opportunities for processing large unstructured datasets through generative artificial intelligence (GenAI) models, but their worth is limited without trustworthy and licensed data. Data in GenAI is not just a quantity game; it is a quality game.

Many businesses are critically considering how to embrace AI opportunities with high-quality data. At LSEG, we have developed a multi-layered strategy that may help guide others in the financial services industry.

The first layer is ensuring data integrity and relevance, which are critical requirements in large language models (LLMs). “GPT-ready” datasets – curated and validated by trusted data providers – are in high demand, and we expect that demand will grow as more businesses explore GenAI’s uses.

High-integrity data acts as a safety net when working with LLMs and other AI applications.

The second layer is digital rights management. Customers expect solutions that verify which sources can or cannot be used in LLMs, govern responsible AI policies, protect against IP infringement and differentiate usage rights.
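To make this second layer concrete, here is a minimal sketch of what a digital-rights check might look like in code: a catalogue of data sources, each carrying its licence terms, filtered so that only sources permitted for LLM use reach a model. All names (`DataSource`, `llm_use_allowed`, the vendor identifiers) are hypothetical illustrations, not an LSEG API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DataSource:
    """A dataset plus the usage rights attached to its licence (illustrative)."""
    name: str
    llm_use_allowed: bool        # may this source feed an LLM?
    redistribution_allowed: bool # may derived outputs be redistributed?


def filter_for_llm_use(catalogue: list[DataSource]) -> list[DataSource]:
    """Keep only sources whose licence permits use in LLMs."""
    return [source for source in catalogue if source.llm_use_allowed]


# Hypothetical catalogue: one source licensed for LLM use, one not.
catalogue = [
    DataSource("vendor_a_prices", llm_use_allowed=True, redistribution_allowed=False),
    DataSource("vendor_b_news", llm_use_allowed=False, redistribution_allowed=False),
]

usable = filter_for_llm_use(catalogue)
```

In practice such checks would sit inside a rights-management system rather than a list comprehension, but the principle is the same: usage rights travel with the data, and the pipeline enforces them before any model sees a record.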

Trust and transparency in financial data
These layers are underpinned by “data trust,” an approach to data that is built on the foundation of information transparency, security, and integrity.

When data leads to big decisions, customers need the assurance that they can trace where data comes from and that it is secure, reliable and able to meet regulatory and compliance standards. Put simply, it is “watermarking” for financial data.
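One simple way to picture this “watermarking” idea is a cryptographic fingerprint: each record is stamped with a hash of its content and source, so any later tampering is detectable. This is a hedged sketch of the general technique, not a description of how LSEG implements data trust; the field names and the example record are invented for illustration.

```python
import hashlib
import json


def watermark(record: dict, source: str) -> dict:
    """Stamp a record with a SHA-256 fingerprint of its content and source."""
    payload = json.dumps(record, sort_keys=True)  # canonical serialisation
    digest = hashlib.sha256(f"{source}|{payload}".encode()).hexdigest()
    return {"data": record, "source": source, "fingerprint": digest}


def verify(stamped: dict) -> bool:
    """Recompute the fingerprint; a mismatch means the data or source changed."""
    payload = json.dumps(stamped["data"], sort_keys=True)
    expected = hashlib.sha256(f"{stamped['source']}|{payload}".encode()).hexdigest()
    return stamped["fingerprint"] == expected


# Hypothetical price record from a named feed.
stamped = watermark({"instrument": "VOD.L", "price": 72.5}, source="exchange_feed")
```

Verifying `stamped` succeeds as long as neither the data nor the recorded source has been altered; change the price and verification fails. Real provenance systems add signatures, timestamps and chains of custody on top of this basic idea.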

All financial services companies must raise the bar on the calibre of their data.

To increase trust in data across the industry, we need greater standardisation, coordination, and a stable regulatory environment, underpinned by clear principles on AI’s responsible and ethical use.

The more standardised the industry definition of data trust, the easier it will be to ensure the flow of high-quality data. If the core principles of transparency, security and integrity of information are applied as a common data standard, we will be able to foster real-time, pinpoint accuracy across the sector.

Laying the ethical groundwork for innovation
The industry should aim for the highest level of transparency so that customers can see what a dataset contains, who owns it, and how it is licensed for use.

Regulations such as the European Union’s AI Act and the Digital Operational Resilience Act introduce safeguards, clear accountability and a focus on governance and preparedness in financial services.

Voluntary guidance, including the National Institute of Standards and Technology’s AI Risk Management Framework in the United States, can also help organisations measure and manage risks to AI systems and data.

It is clear that these regulations are good starting points for how the financial sector should continue to develop safe and fair AI practices. They have inspired our own Responsible AI Principles at LSEG.

Moving forward, policymakers must recognise the need for high-quality data as we develop the AI-enabled tools of the future.

We support the use of internationally agreed-upon definitions relevant to AI and data. We also need more rigorous parameters for managing intellectual property and digital rights.

The path to global AI regulation
At the same time, regulatory requirements for technology must be more interoperable. The more jurisdiction-specific the rules, the harder it is for global companies to scale quickly.

When companies make business decisions across different jurisdictions, diverging rules can affect everything from the location of a data centre to the choice of a cloud provider.

As AI technology develops, policymakers should ensure legislation is flexible enough to align with other jurisdictions while remaining relevant for upcoming AI use cases.

None of this will be easy, but businesses in the financial and tech sectors, regulators, and consumers can all contribute to this conversation. We will need a range of expertise and perspectives as we embrace a technology that will alter our lives.

For AI to meet its potential in addressing the world’s biggest challenges, we must be able to trust the data that is going into it.
