Corporate enthusiasm for artificial intelligence (AI) is colliding with hard reality. A July 2025 report by the Massachusetts Institute of Technology’s (MIT) Media Lab stated that 95% of corporate AI initiatives show zero return, underscoring the gap between experimentation and execution.
For Abhas Ricky, chief strategy officer at Cloudera, the numbers are unsurprising: by definition, he says, 99% of AI experiments were supposed to fail. However, he expects enterprises to push past the initial stages. “If you have a business case defined, executed, agreed upon and aligned, you’ll commit to the experience. [This also prevents AI proof-of-concept] fatigue, even if you have a high degree of failure, because you know that [those efforts support] the business outcome.”
He adds that AI is a game of talent arbitrage. “If you have the right people, they will learn from their mistakes and failures. They're also able to adapt [even if] your systems and workflows [change].”
Furthermore, he says, working on a proof-of-concept is about the core data engineering rather than the model or the agent. “It's about the core data engineering because you still have to ingest, prep and wrangle the data before serving it up for model hosting, retrieval-augmented generation, fine-tuning, etc.”
“[The reality is that] a lot of organisations do not have the fundamentals of the standard data engineering ETL (extract, transform, load) process defined, and they do not have a very robust process around that,” he adds. For example, if a generative AI use case for credit-risk decisioning is launched while the relevant data sits across several providers and deployment models, and robust pipelines are not in place to bring that data together in the required location, the project is likely to fail.
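To make the point concrete, the sketch below shows what such a pipeline could look like in Python, assuming a hypothetical credit-risk use case with data split between a core-banking CSV export and a local bureau extract; the file names, columns and output path are illustrative, not drawn from Cloudera.

```python
import pandas as pd

# Extract: pull loan records and bureau scores from two hypothetical sources
# (in practice these might sit with different providers or deployment models).
loans = pd.read_csv("core_banking_loans.csv")      # illustrative export
bureau = pd.read_json("bureau_scores.json")        # illustrative extract

# Transform: standardise the join key, merge the sources and derive the
# features a credit-risk model (or a fine-tuning corpus) would need.
loans["customer_id"] = loans["customer_id"].astype(str)
bureau["customer_id"] = bureau["customer_id"].astype(str)
merged = loans.merge(bureau, on="customer_id", how="left")
merged["debt_to_income"] = merged["outstanding_debt"] / merged["annual_income"]

# Load: land the prepared data where the downstream model, RAG pipeline or
# agent expects it (here a local Parquet file; often object storage in practice).
merged.to_parquet("curated/credit_features.parquet", index=False)
```

Without the extract and transform steps being defined and repeatable, the downstream model or agent has nothing reliable to serve, which is the gap Ricky describes.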
Building trust in the age of AI
As AI systems become more autonomous, operating as so-called AI agents, data security, privacy and regulatory compliance have become growing concerns.
“We’ve all seen the Terminator movies,” says Cloudera CEO Charles Sansbury. “As agentic AI becomes more pervasive, customers are rightly concerned about having autonomous AI agents (regardless of who puts them there) operating within your data environments.”
“The early defences that we have are around data sovereignty, access control, and all the controls you put around software and technology. But I think the reality is that there's still an element of comfort [in] having your core business data assets in a place [where] you can own and control them,” he adds. “And we're actually seeing that motivation emerge, or resurge, among customers because I think there's still much that is not known or understood about how these autonomous agents will react.”
Addressing anecdotes of bots behaving unexpectedly, Sansbury says one way chief data officers are managing the risk is to deploy new technologies in tightly controlled environments. Still, he says it is early days and the world will see headlines where “security things happen” within those environments, because the pervasiveness and capabilities of bad actors are growing every day.
With increasing regulation of AI and data sovereignty, Sansbury notes Cloudera’s ability to run on customer-owned hardware on-premises and in private cloud environments. “We are architected to deliver that capability. We count as customers several hundred government agencies around the world, many of whom have either data sovereignty requirements or, in some cases, air-gapped requirements [where] the software runs in a contained environment. We see that as an increasingly important issue in many geographies, including Europe, the Middle East and Asia.”
Looking ahead, Ricky believes there will be more model providers, with large portions of the generative AI stack developed within Asia Pacific at a much lower cost to serve. “So by definition, if that happens in a global world, there'll be a higher uptake of those services in the coming years and decades.”
