Why data will make or break the promise of AI agents in financial services

Nurdianah Md Nur • 6 min read
AI agents may promise 20-fold productivity gains but without a strong data foundation, the risks outweigh the rewards. Photo: Shutterstock

At the Bank of Thailand, artificial intelligence (AI) agents are already at work. Its SQL Coding Copilot converts natural language requests into SQL queries, giving non-technical staff access to complex loan-level data.

Combined with metadata copilots and multi-agent reasoning, the AI agent allows employees to ask questions in plain language — such as whether lending patterns point to unusual activity or compliance gaps — and receive accurate answers. Embedding these AI agents into day-to-day supervision has strengthened the central bank’s ability to spot money-laundering red flags earlier and safeguard overall financial stability.
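The Bank of Thailand has not published its implementation, but the general natural-language-to-SQL copilot pattern can be sketched. In the toy version below, a keyword router stands in for the language model, and all table and column names are illustrative only:

```python
# Hypothetical sketch of a natural-language-to-SQL copilot. A real
# system would combine an LLM with schema metadata; here a toy
# keyword router stands in for the model. Table/column names are
# illustrative, not the Bank of Thailand's actual schema.

def nl_to_sql(question: str) -> str:
    """Map a plain-language question to a SQL query (toy example)."""
    q = question.lower()
    if "unusual" in q or "suspicious" in q:
        # Flag loans far above the borrower's historical average
        return ("SELECT loan_id, borrower_id, amount FROM loans "
                "WHERE amount > 10 * avg_historical_amount")
    if "overdue" in q:
        return "SELECT loan_id FROM loans WHERE days_past_due > 90"
    raise ValueError("Question not understood; escalate to an analyst")

print(nl_to_sql("Show loans with unusual activity"))
```

The key design point is the same at any scale: the agent translates intent into a query a non-technical user could never write, while the fallback path keeps a human in the loop for anything it cannot interpret.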

More organisations are following suit. An International Data Corp survey shows that 86% of Southeast Asian companies plan to deploy AI agents within the next year. The appeal lies in how AI agents can collaborate autonomously to perform end-to-end tasks, with humans stepping in only for exceptions, oversight and coaching. McKinsey forecasts that each human practitioner may one day supervise at least 15 AI agents, unlocking as much as a 20-fold productivity gain.

The Oversea-Chinese Banking Corporation (OCBC) is among the early adopters. By using AI agents, the bank has cut private banking onboarding from 10 days and 40 documents to less than an hour while improving standardisation.

Data hurdles slowing AI agents


Despite the promise, scaling AI agents is far from straightforward. The first challenge is data fragmentation. “Transaction records, customer files [and other information] often sit in separate systems spread across on-premises, cloud and regional platforms. Without seamless integration, AI agents struggle to generate timely and accurate insights,” says Remus Lim, senior vice president for Asia Pacific and Japan at Cloudera.

Metadata gaps, he adds, compound the problem. Metadata ensures each transaction is tagged with time stamps, currency details and other data such as compliance markers. Since AI agents rely on these contextual details to interpret activity, incomplete metadata leads to inconsistent results, false positives and missed anomalies. In the case of anti-money-laundering, both data fragmentation and metadata gaps create ambiguities that criminals can exploit while leaving audit trails incomplete and monitoring less effective.
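Why incomplete metadata degrades an agent's output can be illustrated with a simple completeness check. The required fields below follow Lim's examples (time stamps, currency, compliance markers); the field names themselves are hypothetical:

```python
# Hypothetical metadata completeness check. A transaction missing
# required tags cannot be interpreted reliably by an AI agent, so
# gaps should be surfaced before the record reaches the model.
# Field names are illustrative.

REQUIRED_METADATA = {"timestamp", "currency", "compliance_marker"}

def metadata_gaps(txn: dict) -> set:
    """Return the required metadata fields missing from a transaction."""
    return REQUIRED_METADATA - txn.keys()

txn = {"id": "T-1001", "amount": 25_000,
       "timestamp": "2025-03-01T09:30:00Z", "currency": "SGD"}
print(metadata_gaps(txn))  # the compliance marker is missing
```

In an anti-money-laundering pipeline, a record failing this check would be routed for enrichment rather than scored, so the gap becomes an auditable event instead of a silent false negative.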

Data sovereignty adds further complexity. A recent study by Pure Storage and the University of Technology Sydney reveals that the majority of executives cited geopolitics as an accelerant of sovereignty concerns. They also warned of reputational damage if data sovereignty planning lags, with the loss of customer trust as the ultimate consequence of inaction.


Matthew Oostveen, Pure Storage’s chief technology officer and vice president for Asia Pacific and Japan, defines data sovereignty as more than just where information sits. “Data is subject not only to the laws of the country in which it is collected or stored but also to the authority of [foreign governments] that can exert legislative reach over companies [including cloud service providers] incorporated in their jurisdictions,” he says, adding that the dual exposure leaves customers at risk even when their data never leaves national borders.

The challenge is especially acute for multinational banks, notes Gavin Day, chief operating officer of SAS. Large institutions often run separate data strategies in each country and even in each division, resulting in a patchwork of vendors and technologies that do not interoperate. Sovereignty rules then prevent data from being consolidated across borders, further limiting the effectiveness of AI agents.

As AI agents take on more autonomous decision-making, transparency into how they arrive at outcomes is also key. One solution, says Day, is the use of “model cards”, which document what data was used and why, so that business users and regulators (apart from data scientists) can understand the AI model’s performance, fairness and transparency.
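Day does not specify a model-card format, but a minimal card in the spirit he describes — documenting what data was used, why, and how the model behaves — might look like the following. All field names and values are illustrative:

```python
# Hypothetical minimal model card, in the spirit of what Day
# describes: documenting data, purpose, performance and fairness
# checks so business users and regulators, not just data scientists,
# can review the model. All values are illustrative.

model_card = {
    "model": "txn-monitoring-v3",
    "intended_use": "Flag potentially suspicious transactions for human review",
    "training_data": "2019-2024 anonymised transaction history",
    "excluded_data": "Fields correlated with protected attributes",
    "performance": {"precision": 0.91, "recall": 0.84},
    "fairness_checks": ["false-positive parity across customer segments"],
    "limitations": "Not validated for cross-border corporate accounts",
}

def render_card(card: dict) -> str:
    """Flatten the card into a human-readable summary."""
    return "\n".join(f"{k}: {v}" for k, v in card.items())

print(render_card(model_card))
```

Because the card is plain structured data, it can be versioned alongside the model and attached to audit reports automatically.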

Cloudera’s Lim agrees that regulators are demanding more. “Banks must navigate a patchwork of privacy and AI-specific regulations that differ by market and are constantly evolving. Regulators demand transparency in how AI agents make decisions, strict adherence to data sovereignty, and clear audit trails, which are requirements that many legacy systems were not designed for. In addition, these systems often require access to historical and cross-border data for AI agents to operate effectively,” he says.

Foundation for AI agents

Organisations can address those challenges by building a solid data foundation on which AI agents operate. SAS’s Day likens it to constructing a house. “If you don’t have a solid foundation, how could you ever build on top of it? By and large, banks and most other industries are lacking in very sound data management and data governance strategies, [which affects the stability of the data foundation]. The investments they make in AI, whether traditional or agentic, will not be realised without that.”

Echoing him, Cloudera’s Lim advises banks to adopt secure and governed data platforms that apply encryption, tokenisation, and masking consistently across on-premises and cloud environments. This ensures AI agents can access the data they need without breaching data sovereignty rules or exposing personally identifiable information.
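How tokenisation and masking let agents work with data without exposing identifiers can be sketched in a few lines. This is a toy illustration of the general techniques Lim names, not any vendor's implementation, and the salt handling shown is deliberately simplistic:

```python
# Hypothetical illustration of consistent tokenisation and masking,
# so an AI agent can join and analyse records without ever seeing
# raw identifiers. A real deployment would manage the salt in a
# secrets vault, not in source code.
import hashlib

def tokenise(value: str, salt: str = "per-deployment-secret") -> str:
    """Deterministic token: the same input always yields the same
    token, so records can still be joined across systems."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def mask_account(acct: str) -> str:
    """Show only the last four characters of an account number."""
    return "*" * (len(acct) - 4) + acct[-4:]

print(mask_account("12345678"))                         # ****5678
print(tokenise("S1234567A") == tokenise("S1234567A"))   # True
```

Applying the same functions consistently across on-premises and cloud environments is what makes the protection effective: the token is stable everywhere, so analytics still work, while the raw value never travels.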


A granular approach to data governance, supported by a zero-trust architecture, enforces another layer of protection. “In practice, this means no user, system, or agent is trusted by default. This involves accurately identifying where specific customer data resides, applying appropriate controls, and being prepared to produce detailed audit reports,” explains Lim.
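The default-deny principle at the heart of zero trust can be made concrete with a toy policy table. Principals, datasets and rules below are all hypothetical:

```python
# Hypothetical zero-trust style access check: every request by a
# user, system or agent is verified against an explicit grant, and
# anything not explicitly allowed is refused. Names and rules are
# illustrative only.

POLICY = {
    ("aml_agent", "transactions", "SG"): True,   # allowed, and audited
    ("aml_agent", "customer_pii", "SG"): False,  # masked view only
}

def allow(principal: str, dataset: str, jurisdiction: str) -> bool:
    """Default-deny: absent an explicit grant, access is refused."""
    return POLICY.get((principal, dataset, jurisdiction), False)

print(allow("aml_agent", "transactions", "SG"))   # True
print(allow("aml_agent", "customer_pii", "MY"))   # False (no grant)
```

In practice the policy would also log every decision, which is what makes the detailed audit reports Lim mentions possible.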

He adds that metadata should be centrally managed with tools such as Cloudera’s Shared Data Experience while data lineage and catalogue platforms like Cloudera Octopai can unify the entire data estate. Doing so eliminates blind spots, strengthens data governance and helps ensure sensitive information is protected from unauthorised access and insider threats.

For Pure Storage’s Oostveen, the path forward lies in a hybrid approach. Companies must identify which datasets and workloads are too critical and sensitive to leave sovereign environments and which can safely run in public clouds. “Those decisions should not be made in isolation by just the IT team. They must be made in collaboration with the legal team, chief operating officer, chief data officer and even the board from a governance perspective,” he stresses.

He also advises banks to focus on three areas when choosing a sovereign service provider: jurisdictional independence, operational resilience and the ability to meet local audit requirements, all without locking in customers or driving up costs.

Banks should also focus on building a culture of trust and transparency, says Lim. That means managing expectations around how data is used and where the ethical boundaries of AI agents lie. In transaction-monitoring models, for example, institutions should maintain version control, conduct model testing and keep audit logs so compliance teams can retrace how an agent flagged a suspicious transaction.

Clear guardrails are equally important. Escalation thresholds, automated alerts that always route to human investigators, and dashboards tracking agent behaviour help ensure accountability. “Having these in place not only reassures regulators but also builds consumer confidence that AI is being applied responsibly in sensitive areas of banking,” says Lim.
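The escalation guardrail Lim describes — alerts above a risk threshold always routed to a human investigator — reduces to a simple routing rule. The threshold value and queue names here are illustrative:

```python
# Hypothetical escalation guardrail: agent-generated alerts above a
# risk threshold always route to a human investigator; the agent is
# never allowed to close a high-risk case on its own. The threshold
# and queue names are illustrative.

ESCALATION_THRESHOLD = 0.7  # illustrative risk-score cut-off

def route_alert(risk_score: float) -> str:
    """Decide who handles an agent-raised alert."""
    if risk_score >= ESCALATION_THRESHOLD:
        return "human_investigator"   # agent may not decide alone
    return "agent_triage_queue"      # lower risk, still logged and auditable

print(route_alert(0.85))  # human_investigator
print(route_alert(0.40))  # agent_triage_queue
```

Pairing a rule like this with dashboards that track how often the threshold fires gives compliance teams exactly the accountability trail the article describes.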

In sum, AI agents are set to transform banking, but their promise will fall short without strong data integration, sovereignty safeguards and transparency. The message is clear: get the data foundation right first.

© 2025 The Edge Publishing Pte Ltd. All rights reserved.