Your AI is only as good as your data and software

Damien Wong • 5 min read
Here's why organisations in Singapore must strike the right balance by allocating budgets effectively, investing not only in hardware but also in the software needed to power agentic AI. Photo: Unsplash

Artificial intelligence (AI) has moved from buzzword to boardroom priority. As businesses race to stay competitive and meet rising digital expectations, investments in AI are pouring in fast. Gartner and IDC project global AI spending to reach US$644 billion ($829 billion) in 2025, with Asia Pacific’s slice of that pie set to reach US$175 billion by 2028.

However, here’s the catch: the IDC report states that much of this AI spending is going into familiar areas, such as cloud platforms, data centres, and computing power. This follows the pattern of every wave of technological change, where organisations first lay the groundwork to accelerate adoption and figure out everything else later. Yet many are overlooking the most fundamental layer of all: data quality and software integrity.

To fully capture the benefits of AI, organisations must strike a balance: investing not only in the hardware that enables foundational AI capabilities but also in software quality, data governance, and preparing teams to work effectively alongside AI systems.

Here’s how organisations can move forward with confidence.

Software quality is the hidden key to successful AI

AI systems are only as effective as the data they learn from and the software they run on.

They don’t just consume data once; they continuously learn from it. If errors or biases exist in the data pipeline, these flaws become amplified, resulting in persistent inaccuracies and flawed recommendations. To prevent this, organisations require robust software capable of validating data right from the entry point and maintaining its integrity throughout the pipeline.
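As a simple illustration (the field names, rules and thresholds below are hypothetical, not drawn from any particular product), entry-point validation can be as basic as quarantining records that fail a set of checks before they ever reach the training pipeline:

```python
# Illustrative sketch only: entry-point validation for records entering an AI data pipeline.
# Field names and rules are hypothetical placeholders.

from datetime import datetime

REQUIRED_FIELDS = {"customer_id", "transaction_amount", "timestamp"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is clean."""
    errors = []

    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors  # no point checking further without the basics

    amount = record["transaction_amount"]
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("transaction_amount must be a non-negative number")

    try:
        datetime.fromisoformat(record["timestamp"])
    except (TypeError, ValueError):
        errors.append("timestamp is not a valid ISO-8601 string")

    return errors

# Records that fail validation are quarantined instead of silently entering the pipeline.
incoming = [
    {"customer_id": "C001", "transaction_amount": 42.5, "timestamp": "2025-01-15T09:30:00"},
    {"customer_id": "C002", "transaction_amount": -10, "timestamp": "not-a-date"},
]

clean, quarantined = [], []
for rec in incoming:
    if validate_record(rec):
        quarantined.append(rec)
    else:
        clean.append(rec)

print(f"clean: {len(clean)}, quarantined: {len(quarantined)}")
```

The point is not the specific checks, but that bad records are caught and set aside at the door rather than silently shaping the model.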

Poor software quality can severely compromise data integrity at every stage, from collection and processing to storage and utilisation. Errors introduced during model training or deployment don’t just slow down innovation; they can directly translate into significant financial losses. In Singapore, more than 74% of organisations estimated annual losses of between $660,000 and $6.6 million from software defects and costly rework.

Flawed data quality can also lead to incorrect AI recommendations, while software quality issues can halt operations entirely, causing disruptive downtime and erosion of customer trust. It is no surprise that more than half (61%) of Singapore organisations reportedly face a significant outage risk within the next few years, especially in regulated industries like financial services.

But these risks are not inevitable. When organisations prioritise software quality from the start, they can safeguard their AI investments and ensure their systems perform reliably when it matters most. In a landscape where AI will increasingly define competitive advantage, building these foundations will ensure both success and scale.

Hardware over software: the AI investment imbalance

While infrastructure is undoubtedly critical, a singular focus on it risks underfunding crucial software quality assurance (QA) and AI lifecycle management, as even the most powerful hardware will fail to deliver meaningful results without rigorous testing and validation.

When organisations prioritise hardware at the expense of QA, they often end up with brittle applications, inadequately tested models, and stalled AI initiatives. Under-investment in software testing means development teams lack the resources, tools, or time required to rigorously validate AI systems before deployment. As a direct consequence, nearly half (47%) of organisations reported having shipped untested or insufficiently validated code due to deadline pressures, significantly heightening risks of operational disruptions and reputational damage.
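To make "rigorous validation before deployment" concrete, a minimal sketch of a release gate might look like the following; the toy model, evaluation set and 90% threshold are placeholders for illustration only:

```python
# Illustrative sketch only: a release gate that blocks deployment if a model
# falls below a quality threshold on a held-out evaluation set.
# The threshold and evaluation data are hypothetical placeholders.

ACCURACY_THRESHOLD = 0.90

def evaluate(model, eval_set) -> float:
    """Fraction of evaluation examples the model predicts correctly."""
    correct = sum(1 for features, label in eval_set if model(features) == label)
    return correct / len(eval_set)

def release_gate(model, eval_set) -> bool:
    """Return True only if the model clears the quality bar."""
    accuracy = evaluate(model, eval_set)
    if accuracy < ACCURACY_THRESHOLD:
        print(f"BLOCKED: accuracy {accuracy:.2%} below threshold {ACCURACY_THRESHOLD:.0%}")
        return False
    print(f"PASSED: accuracy {accuracy:.2%}")
    return True

def toy_model(x: int) -> bool:
    """Stand-in for a real model: predicts True for positive inputs."""
    return x > 0

toy_eval_set = [(1, True), (2, True), (-1, False), (3, True), (-2, False)]

if release_gate(toy_model, toy_eval_set):
    print("Deploying model...")
else:
    print("Deployment halted pending fixes.")
```

In practice such a gate would sit inside a CI/CD pipeline, but the principle is the same: a model that does not meet its quality bar never reaches production.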

A balanced AI investment strategy for the agentic AI era

AI is rapidly evolving from systems that simply automate to agentic AI, which can act with greater autonomy, making decisions and taking actions with minimal human prompting. This shift towards agentic AI marks a new chapter for businesses, bringing both transformative potential and new risks that demand careful oversight.

Momentum for this is building fast. Industry projections suggest that by 2028, one-third of enterprise software applications will incorporate agentic AI, representing a significant increase from the current rate of less than 1%. Our research also echoes this optimism, with over 80% of respondents expecting significant productivity benefits as agentic AI increasingly handles repetitive and routine tasks.

Organisations in Singapore must strike the right balance by allocating budgets effectively, investing not only in the hardware and software needed to power agentic AI, but also in preparing the people to work alongside it. This means investing in comprehensive training programmes to prepare teams to interpret agentic output and maintain the human oversight that keeps these systems safe and effective.

In this new landscape, human supervision remains the one safeguard organisations can’t afford to ignore. In fact, nearly half (48%) of CIOs and CTOs rank ethical AI risk management as a critical skill, reflecting a broader industry recognition that without clear guardrails and contextual judgment, the promise of agentic AI could quickly become a liability.

The winning companies aren’t the ones with the most data, but rather those that train their AI systems on the right data, paired with quality software that can scale effectively. Speed and scale without quality come at a high cost.

We’ve already seen the fallout when untested systems falter: projects stall, customer trust erodes, and costs skyrocket.

In a landscape increasingly shaped by autonomous AI, long-term success will depend on the strength of what lies beneath: clean, trusted data and resilient, well-tested software.

Damien Wong is the senior vice president for Apac at Tricentis.
