Artificial intelligence (AI) is shifting from a tool that demands constant human oversight to a collaborative partner in the workplace. This transformation promises greater efficiency and a fundamental reallocation of human capital, freeing people from routine tasks to focus on more strategic and creative work.
The economic impact of this change is significant. According to IDC, AI solutions and services are projected to have a global cumulative impact of US$22.3 trillion ($29.1 trillion) by 2030, or approximately 3.7% of global GDP. Realising this value requires organisations to adopt AI agents that integrate seamlessly into workflows, collaborating effectively with each other and with human teams.
"AI agents are intelligent systems that exhibit reasoning, planning and memory capabilities. They're able to think multiple steps ahead and use tools (including working with software and systems) to get something done on your behalf and under your supervision. By working alongside employees, AI agents can drive operational efficiencies, enhance decision-making and increase innovation," says Google Cloud CEO Thomas Kurian at the Google Cloud Next 2025 event in Las Vegas.
He adds that Google Cloud is the platform of choice for AI as it "offers an AI-optimised platform with leading price, performance and precision; open and multi-cloud capabilities; and an enterprise-ready platform built for interoperability."
The backbone of a multi-agent ecosystem
While tech vendors have been promoting AI agents since late last year, Google Cloud is positioning itself as the enabler of a multi-agent ecosystem where AI agents can work together seamlessly, even if they are built on different platforms and systems.
"Organisations will use different agentic platforms (such as Salesforce or SAP) to develop AI agents for various tasks. With no open-source protocol to connect them, getting AI agents to talk to each other requires complex re-engineering work, redesigning workflows, managing access to apps and more. These challenges will intensify when AI agents are used at scale across the enterprise," Mitesh Agarwal, Google Cloud's managing director of Technology and Solutions for Asia Pacific, tells The Edge Singapore.
The new Agent2Agent (A2A) protocol is Google Cloud's answer to this interoperability problem. It gives AI agents a common, open language to communicate, securely exchange information and coordinate actions, regardless of the framework or vendor they are built on. More than 50 industry leaders, including Box, Intuit, Salesforce, SAP, PayPal and ServiceNow, are already backing the protocol.
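In practice, an agent-to-agent exchange of this kind can be pictured as one agent posting a structured task request to another and receiving a structured reply. The stdlib-only Python sketch below illustrates that idea with a JSON-RPC-style envelope; the method and field names here are illustrative approximations, not the normative A2A wire format.

```python
import json


def make_task_request(task_id: str, text: str) -> str:
    """Build a JSON-RPC-style task request one agent could send to another.

    The envelope shape is an illustrative approximation of an
    agent-to-agent message, not the published A2A specification.
    """
    envelope = {
        "jsonrpc": "2.0",
        "id": task_id,
        "method": "tasks/send",  # hypothetical method name
        "params": {
            "task": {
                "id": task_id,
                "message": {
                    "role": "user",
                    "parts": [{"type": "text", "text": text}],
                },
            }
        },
    }
    return json.dumps(envelope)


def handle_task_request(raw: str) -> dict:
    """A receiving agent parses the envelope and returns a structured reply."""
    req = json.loads(raw)
    text = req["params"]["task"]["message"]["parts"][0]["text"]
    return {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"status": "completed", "echo": text},
    }


request = make_task_request("task-001", "Summarise today's expense reports")
reply = handle_task_request(request)
print(reply["result"]["status"])  # completed
```

The point of a shared envelope like this is that neither agent needs to know how the other was built; each only has to produce and consume the agreed message shape.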
To further help enterprises build AI agents at scale, Google Cloud has rolled out the Agent Development Kit (ADK) and the Agent Garden. With ADK, developers can build AI agents in fewer than 100 lines of code while retaining granular control over their behaviour. ADK supports plug-and-play tools via the Model Context Protocol (MCP) and integrates securely with business application programming interfaces (APIs) through Apigee, making it easy to connect AI agents to enterprise systems without heavy engineering.
The Agent Garden complements this with a library of ready-to-use tools, templates and connectors built directly into ADK. It reduces time-to-value by enabling AI agents to tap into over 100 pre-built connectors, custom APIs, integration workflows and data stored within cloud systems like BigQuery and AlloyDB. This allows businesses to embed AI agents into real-world operations with minimal friction.
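The development pattern these kits promote, an agent defined as a set of instructions plus registered, pluggable tools, can be sketched in a few lines of plain Python. This is a hypothetical illustration of the pattern only; it does not reproduce the ADK API, and all names (MiniAgent, lookup_invoice) are invented for the example.

```python
from typing import Callable, Dict


class MiniAgent:
    """A toy agent with pluggable tools, illustrating the general pattern
    (agent = instructions + registered tools) that agent development kits
    build on. Not the ADK API."""

    def __init__(self, name: str, instruction: str):
        self.name = name
        self.instruction = instruction
        self.tools: Dict[str, Callable[[str], str]] = {}

    def register_tool(self, tool_name: str, fn: Callable[[str], str]) -> None:
        """Plug in a tool, analogous to wiring up a pre-built connector."""
        self.tools[tool_name] = fn

    def run(self, tool_name: str, payload: str) -> str:
        """Dispatch a request to the named tool under the agent's control."""
        if tool_name not in self.tools:
            raise KeyError(f"unknown tool: {tool_name}")
        return self.tools[tool_name](payload)


# Hypothetical tool standing in for, say, a database or API connector.
def lookup_invoice(invoice_id: str) -> str:
    return f"invoice {invoice_id}: paid"


agent = MiniAgent("billing-agent", "Answer invoice questions")
agent.register_tool("lookup_invoice", lookup_invoice)
print(agent.run("lookup_invoice", "INV-42"))  # invoice INV-42: paid
```

The pre-built connectors in the Agent Garden play the role of `lookup_invoice` here: ready-made tools an agent can register and call without custom integration work.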
"We're taking a holistic approach to scaling AI agents and enabling a multi-agent ecosystem. Whether organisations want to build their own AI agents, use third-party solutions, or deploy ready-made AI agent samples, our horizontal platform (that is, Vertex AI) supports it all. And with the open A2A protocol, AI agents can seamlessly work together regardless of where or how they were built. We don't think our competitors are offering this level of openness and interoperability [across multiple systems, clouds and applications]," says Agarwal.
Cybersecurity and sovereignty needs
As enterprise infrastructure expands in scale and complexity, the cyberattack surface also grows. To stay ahead of emerging threats, organisations should consider deploying the following:
- Alert triage AI agent in Google Security Operations, which conducts dynamic investigations on behalf of human cyber defenders. It analyses the context of each alert, gathers relevant information, and renders a verdict on the alert, along with a history of the agent's evidence and decision-making.
- Malware analysis AI agent in Google Threat Intelligence, which analyses potentially malicious code (including the ability to create and execute scripts for deobfuscation) and delivers a final verdict with a summary of its analysis.
Both AI agents operate within Google Unified Security, which integrates visibility, threat detection, AI-driven security operations, continuous virtual red teaming, an enterprise browser and Mandiant's threat intelligence. Together, these form a unified, scalable and searchable cybersecurity data fabric across the entire attack surface, allowing organisations to anticipate threats and address risks proactively.
Organisations with strict sovereignty and regulatory requirements can now run AI workloads securely on-premises with the air-gapped version of Google Distributed Cloud (GDC), which provides access to Google Cloud services without a connection to the public Internet.
Singapore's Centre for Strategic Infocomm Technologies (CSIT) is an example. It uses GDC to access Google Cloud's AI and data services while processing large volumes of sensitive data in complete isolation from the public Internet. This ensures CSIT maintains full operational control over its data and software, protecting mission-critical workloads.
To further extend these capabilities, Google is now bringing its Gemini AI model to GDC. This removes a long-standing trade-off for organisations dealing with sensitive information and facing regulatory or latency constraints, such as CSIT. Instead of relying solely on open-source models and self-managed infrastructure, they can now access state-of-the-art AI models while keeping data on-premises and operational complexity low.
Loh Chee Kin, CSIT's deputy chief executive, is looking forward to having that capability. He adds: "Having Gemini on GDC air-gapped means Google Cloud can offer us the full tech stack. We can not only use the latest Gemini model to develop generative AI apps but also deploy and manage them efficiently. This reduces manual toil and allows our teams to focus on higher-order work."
The AI infrastructure piece
Despite the rise of smaller, specialised AI models, organisations will increasingly rely on models with advanced reasoning and multimodal capabilities. These models will drive higher compute demands and accelerate the need for AI chip innovation.
This is why Google has been focusing on its custom-built tensor processing units (TPUs): chips purpose-built to accelerate machine learning workloads. Optimised for deep learning, TPUs are said to deliver better performance per dollar than traditional GPUs and CPUs, helping enterprises lower AI infrastructure costs or get more compute out of the same budget.
Ironwood is the latest TPU. "It is designed to gracefully manage the complex computation and communication demands of 'thinking models', which encompass large language models, mixture-of-experts (MoE) models and advanced reasoning tasks. These models require massive parallel processing and efficient memory access," says Amin Vahdat, vice president and general manager of machine learning, systems and cloud AI at Google Cloud.
The company says Ironwood is 3,600 times more powerful and 29 times more energy-efficient than the original TPU launched in 2013.
Google is also helping enterprises navigate growing network complexity with Cloud WAN, its fully managed wide area network (WAN) service. As companies grapple with fragmented network architectures, inconsistent security, and constant trade-offs between cost, reliability and speed, AI adoption is adding new pressure. Modern AI workloads demand a highly distributed infrastructure (spanning multiple clouds and on-premises data centres) along with scalable, secure and efficient connectivity.
Cloud WAN offers a unified solution that securely and reliably connects users, applications and sites worldwide, while reducing costs. Google says the service can deliver up to 40% faster performance than the public Internet and reduce total cost of ownership by up to 40% compared with customer-managed WANs.
Kurian says that Cloud WAN lets enterprises use Google's backbone network to connect their global operations with greater reliability and security. This, in turn, enables resilient mission-critical applications, zero-trust security to protect their data and a performant network for power-intensive real-time apps.