
Is the AI boom a bubble or a bet on execution?

Nurdianah Md Nur • 8 min read
Long-term forecasts remain bullish, yet most deployments still fall short in the near term. How companies scale AI in 2026 will decide whether the boom endures. Photo: Shutterstock

By 2030, half of the new economic value created by digital businesses across the Asia Pacific and Japan will come from companies investing in and successfully scaling AI today. By 2027, International Data Corp expects 60% of organisations to have deployed multi-agent systems, or autonomous AI agents that can act independently, coordinate tasks and orchestrate workflows with minimal human intervention.

Yet that future hinges on overcoming a troubling reality. A July study by the Massachusetts Institute of Technology (MIT) found that only 5% of integrated AI pilots are extracting millions in value, while the vast majority have yet to deliver any measurable impact on profit and loss.

The gap between these competing futures is widening as hundreds of billions of dollars pour into data centres, chips and models. Whether AI delivers on its promise or collapses under the weight of expectations depends on the choices organisations make in the next few years.

Not one AI bubble, but three

Concerns that AI may be sliding into a bubble are resurfacing, reviving anxieties from the late 1990s. However, a December report by Deutsche Bank Research argues the current cycle is more complex than a simple boom-or-bust replay. It suggests the market is better understood as three overlapping bubbles spanning valuations, investment and technology.

On valuations, public market excesses remain limited. Unlike the dot-com era, companies driving AI investment today are profitable, cash-generative and self-funding. The sharpest distortions sit in private markets, where unprofitable start-ups like OpenAI and Anthropic command multiples far above established firms. Private AI funding also remains robust, with Databricks raising over US$4 billion ($5.16 billion) this month through a Series L round at a US$134 billion valuation.


Investment and execution present greater uncertainties. AI-related capital expenditure, led by the hyperscalers, is forecast to reach historic levels by the end of the decade, even as many enterprises struggle to turn pilots into production systems capable of delivering sustained returns.

Technology is the third and least settled risk. While advances in large language models continue, Deutsche Bank Research cautions that generative AI still faces practical constraints at scale. These include reliability issues, hallucinations, rising costs and infrastructure bottlenecks.

The execution gap


Behind the three-bubble framework lies a more fundamental problem: most organisations are pursuing the AI experiments least likely to move the needle. MIT’s research found that initiatives delivering real value tend to focus on narrow but high-impact use cases, integrate deeply into existing workflows, and scale through continuous learning rather than broad feature sets.

That diagnosis points to an execution gap rather than a technology failure, placing leadership and operational discipline at the centre of AI’s next phase.

“Scaling AI successfully requires humans in the lead, with leaders setting strategy, embedding oversight and ensuring AI is applied where it can truly make an impact. The journey starts with identifying ‘agentic hotspots’, or high-value functions where AI agents can unlock meaningful efficiency or growth, and simplifying or rethinking legacy processes before applying AI,” says Anoop Sagoo, CEO for Southeast Asia at Accenture.

Companies that succeed, he adds, pair those targeted opportunities with disciplined investment decisions, including what to build internally and what to source from ecosystem partners. That approach helps contain costs, avoid technical debt and turn pilots into measurable business outcomes. “A human-in-the-lead, process-first mindset is the difference between pilots that stall and AI that transforms.”

However, strategic clarity must be matched by operational readiness. “Many AI projects fail to progress beyond pilots due to scalability gaps and limited in-house capabilities. To move from proof of concept to real outcomes, enterprises need to strengthen three foundations: data readiness, governance, and cost discipline,” notes Fan Ho, executive director and general manager of Lenovo’s Solutions and Services Group for Asia Pacific.

Cost discipline starts with IT decisions. Instead of moving everything to the cloud by default, Ho says companies should choose where workloads run, combining cloud, on-premises and edge systems to balance cost, speed, control and regulatory needs.
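As a simplified illustration of that discipline, a workload-placement rule of the kind Ho describes might look like the short Python sketch below; the thresholds, labels and decision order are illustrative assumptions, not Lenovo's methodology.

def place_workload(latency_ms: int, data_must_stay_onshore: bool,
                   steady_utilisation: bool) -> str:
    """Pick a deployment target from a workload's constraints (illustrative)."""
    if latency_ms < 10:
        return "edge"          # real-time inference near the data source
    if data_must_stay_onshore:
        return "on_premises"   # regulatory control over where data lives
    if steady_utilisation:
        return "on_premises"   # predictable load is often cheaper to own
    return "public_cloud"      # bursty load: pay only for what is used

# A latency-critical workload lands at the edge; a regulated one stays on-prem.
print(place_workload(latency_ms=5, data_must_stay_onshore=False, steady_utilisation=False))
print(place_workload(latency_ms=200, data_must_stay_onshore=True, steady_utilisation=False))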

Data infrastructure bottleneck


Strategy may set the ambition for AI, but data infrastructure determines whether those plans work in practice, especially as organisations try to scale autonomous, agent-driven systems across the enterprise. Two-thirds of Asia Pacific businesses cite insufficient infrastructure for real-time data processing as a major challenge when accelerating AI adoption, according to Confluent’s 2025 Data Streaming Report.

“AI agents can only work as effectively as the data they can access. Organisations need to rethink their data strategies around real-time and do the hard but necessary work to fix data pipelines, enable continuous movement of real-time data and reinforce data governance,” says Nick Dearden, Confluent’s field chief technology officer. Modern data streaming platforms can help by connecting, streaming, processing and governing data in real time to provide contextual and trustworthy data that AI requires.
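For a sense of the plumbing involved, the minimal Python sketch below consumes events from a Kafka topic with Confluent's open-source confluent-kafka client. The broker address and the "orders" topic are hypothetical, and a production pipeline would layer validation, enrichment and governance on top.

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumption: a local broker
    "group.id": "agent-context-feed",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])  # hypothetical topic feeding an AI agent

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # wait up to 1s for a new event
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # In a real pipeline this event would be validated, enriched and
        # handed to the agent as fresh, governed context.
        print(msg.value().decode("utf-8"))
except KeyboardInterrupt:
    pass
finally:
    consumer.close()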

That challenge extends to how information is organised and accessed, particularly in multi-agent systems requiring both structured and unstructured data.

“AI agents require accurate and up-to-date structured data, such as customer data and financial records, as well as unstructured data like call transcripts and knowledge articles. To implement multi-agent systems, organisations must first redesign their data architecture to create a trusted and unified data foundation,” says Gavin Barfield, vice president and chief technology officer for Solutions at Salesforce Asean.

Autonomous AI agents also require speed, which places new pressures on enterprise infrastructure. Matthew Oostveen, vice president and chief technology officer for Asia Pacific and Japan at Pure Storage, says: “Unlike predictable model training, agentic AI is fast-moving, highly dynamic and extremely sensitive to delays. AI agents perform millions of rapid, random data lookups across multiple systems to make decisions instantly. Any delay in retrieving that information becomes an immediate performance bottleneck.”

Organisations must therefore unify how data is organised and accessed through a single data plane that reduces fragmentation, manages duplicates and keeps information close to the AI compute layer. “Mechanical drives simply cannot deliver the speed, concurrency or energy efficiency required for real-time autonomous operations,” argues Oostveen. “High-performance flash systems provide predictable, microsecond-level responsiveness, allowing organisations to move confidently from slow, limited AI trials to reliable, production-grade autonomous agents.”

The emphasis on speed, however, must be balanced against cost realities, especially for organisations managing vast volumes of training data alongside high-performance inference workloads.

“In the AI era, data is constantly in motion between hot, warm and cold tiers depending on usage. Enterprises need infrastructure that keeps pace with this fluidity through automated tiering, scale-out architectures and intelligent, software-defined management… to optimise both cost and performance,” notes Stefan Mandl, vice president of Sales and Marketing for APJC at Western Digital.

He adds that hard disk drives (HDDs) are well-suited as the foundation for large volumes of data as they integrate seamlessly with AI pipelines, keeping large datasets accessible and economical for training and retraining models. Meanwhile, higher-performance media like flash is reserved for caching, inferencing and metadata operations. “With scalable economics, adaptability and intelligent data management, enterprises can build a storage ecosystem that grows with demand, drives efficiency, and unlocks the full value of AI.”
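A simplified version of the automated tiering Mandl describes can be expressed as a rule on access recency, as in the Python sketch below; the windows and tier names are illustrative assumptions, not any vendor's defaults.

from datetime import datetime, timedelta

HOT_WINDOW = timedelta(days=7)    # recently used: keep on flash
WARM_WINDOW = timedelta(days=90)  # occasionally used: mid tier

def pick_tier(last_accessed: datetime) -> str:
    """Choose a storage tier from an object's last access time (illustrative)."""
    age = datetime.now() - last_accessed
    if age <= HOT_WINDOW:
        return "flash"      # caching, inferencing and metadata operations
    if age <= WARM_WINDOW:
        return "hdd_warm"   # active training datasets
    return "hdd_cold"       # archival data kept economical for retraining

print(pick_tier(datetime.now() - timedelta(days=3)))    # flash
print(pick_tier(datetime.now() - timedelta(days=200)))  # hdd_cold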

The need for orchestration

Despite the promise of AI agents, few organisations expect them to operate in isolation. At scale, automation still depends on how effectively work is coordinated across AI agents, software robots such as robotic process automation (RPA) bots, and humans, says Amit Khandelwal, regional vice president and managing director for Southeast Asia at UiPath.

Omega Healthcare, for example, uses UiPath’s platform to coordinate AI agents and automation with human oversight across its accounts receivable, denial management and payment posting processes. This enabled it to resolve denials and credit balances with significantly greater speed and accuracy. “This orchestrated strategy ultimately lowers costs and proves the transformative power of agentic automation in complex environments,” adds Khandelwal.
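Stripped to its essentials, that orchestration is a routing decision: each work item goes to an AI agent, an RPA bot or a human reviewer. The Python sketch below illustrates the pattern with hypothetical task fields and thresholds; it is not UiPath's API.

def route(task: dict) -> str:
    """Decide who handles a task: an agent, a bot or a human (illustrative)."""
    if task["type"] == "rule_based":        # deterministic steps suit RPA
        return "rpa_bot"
    if task.get("confidence", 0.0) >= 0.9:  # the agent is confident enough
        return "ai_agent"
    return "human_review"                   # low confidence: escalate

for task in [
    {"id": 1, "type": "rule_based"},
    {"id": 2, "type": "denial_claim", "confidence": 0.95},
    {"id": 3, "type": "denial_claim", "confidence": 0.40},
]:
    print(task["id"], "->", route(task))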

Governance and security at scale

As AI deployments expand from pilots to enterprise-wide systems spanning countries and business units, governance is becoming a gating factor rather than a box-ticking exercise. Once autonomous AI agents operate across critical platforms such as Oracle and SAP, organisations must be able to see where data resides, how it is accessed and what exposure exists.

“Organisations need granular control and masking to understand what is exposed and what must be protected,” says Beni Sia, general manager and senior vice president for Asia Pacific and Japan at Veeam. “Only then can the C-suite confidently put a stamp of approval on agentic automation, knowing outcomes are based on governed data and regulatory requirements are met.”
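In practice, the granular control Sia describes can start with something as simple as masking sensitive fields before a record ever reaches an agent. The Python sketch below uses hypothetical field names and rules; it is not Veeam's implementation.

import re

SENSITIVE_FIELDS = {"nric", "credit_card", "email"}

def mask_record(record: dict) -> dict:
    """Return a copy of a record with sensitive fields masked (illustrative)."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS and isinstance(value, str):
            # Mask all but the last four characters so the value stays traceable
            masked[key] = re.sub(r".(?=.{4})", "*", value)
        else:
            masked[key] = value
    return masked

print(mask_record({"name": "Tan A.", "nric": "S1234567D"}))
# {'name': 'Tan A.', 'nric': '*****567D'}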

The security dimension adds urgency. “AI agents function as always-on digital employees, and if they are not intentionally governed, they can become one of the most powerful insider threats an organisation can create,” warns Tom Scully, principal architect for JAPAC at Palo Alto Networks.

As attackers shift from targeting humans to manipulating AI agents directly, he advises organisations to implement careful identity management with continuously validated access and real-time oversight. “Organisations need continuous discovery, posture management and runtime controls capable of detecting and blocking misuse at machine speed.”
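Continuously validated access amounts to re-checking every action an agent attempts against current policy at runtime, rather than trusting a one-time login. The Python sketch below illustrates the idea with a hypothetical policy table and action names.

POLICY = {
    "billing_agent": {"read:invoices", "write:payment_posting"},
}

def authorise(agent_id: str, action: str) -> bool:
    """Allow an action only if the agent's current policy permits it."""
    return action in POLICY.get(agent_id, set())

def perform(agent_id: str, action: str) -> None:
    if not authorise(agent_id, action):
        # Blocked and surfaced at machine speed, not after the fact
        raise PermissionError(f"{agent_id} denied: {action}")
    print(f"{agent_id} executed {action}")

perform("billing_agent", "read:invoices")        # allowed
# perform("billing_agent", "delete:database")    # raises PermissionError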

The stakes are coming into focus. AI’s next phase will be shaped not by how much capital is deployed or how powerful models become, but by whether organisations can close the widening gap between ambition and execution before patience and confidence start to erode.
