In defence, aviation and cybersecurity, milliseconds can mean the difference between safety and disaster. In these mission-critical environments, artificial intelligence (AI) must perform not just efficiently, but flawlessly. A chatbot can recover from a typo or malfunction, but an air-traffic control system simply cannot. The challenge now is not to make AI more powerful; it is to make it completely trustworthy, because power is worthless without trust.
Unlike generative or agentic AI powering chatbots and virtual assistants, critical-systems AI operates under strict, unforgiving conditions. Reliability, safety, and explainability take precedence over speed and novelty. Whether supporting pilots in the cockpit, optimising landing routines, or securing airspace from drones, these systems demand precision under pressure. In this context, trust is not just a byproduct of performance. It is the performance itself.
Critical systems are a different class of intelligence
Commercial AI and critical-systems AI belong to two very different worlds. The first optimises for user engagement, scalability, and convenience. The second operates under the watchful eye of regulators, system engineers, and operators, where reliability is paramount, and even the smallest errors are intolerable.
In a chatbot, a wrong answer leads to inconvenience or embarrassment. In an air-traffic management system, it can compromise safety and erode public trust. Where commercial AI thrives on vast, flexible datasets, critical-systems AI must integrate seamlessly with deterministic, rule-based systems and operate within tightly constrained hardware environments.
Critical systems also demand continuous uptime, operating 24/7. They often run in physically secure environments with layers of cybersecurity defence. AI in this context must balance statistical learning with model-based logic, merging data-driven algorithms with the rigour of physics-based or rules-based reasoning. This hybrid approach ensures that even as data changes, the underlying principles of safety and performance remain intact.
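To make the hybrid idea concrete, here is a minimal, purely illustrative sketch of how a data-driven score can be subordinated to deterministic rules. All thresholds, field names, and the `classify` function are hypothetical inventions for this example, not any actual Thales system; the point is only the ordering: hard safety rules run first and cannot be overridden by the learned model, which merely flags cases for a human operator.

```python
# Toy "hybrid AI" decision gate: deterministic rules first, statistical
# score second. Thresholds and signal names are hypothetical.

from dataclasses import dataclass

@dataclass
class TrackReading:
    speed_mps: float        # measured speed of a tracked object
    altitude_m: float       # measured altitude
    anomaly_score: float    # output of a data-driven model, in [0, 1]

def classify(reading: TrackReading) -> str:
    # Rule-based checks run first: their outcome cannot be overridden
    # by the learned model, keeping behaviour predictable.
    if reading.altitude_m < 0 or reading.speed_mps < 0:
        return "REJECT_INVALID_DATA"   # sensor fault: never reaches the model
    if reading.altitude_m < 120 and reading.speed_mps > 40:
        return "ALERT_RULE"            # hard safety rule fires regardless of score

    # Only then does the statistical score contribute, and even then it
    # flags the case for a human operator rather than acting autonomously.
    if reading.anomaly_score > 0.8:
        return "FLAG_FOR_OPERATOR"
    return "NOMINAL"

print(classify(TrackReading(speed_mps=50.0, altitude_m=100.0, anomaly_score=0.2)))
print(classify(TrackReading(speed_mps=10.0, altitude_m=300.0, anomaly_score=0.9)))
```

Even when the model's score is low, the rule still fires in the first call; even when the score is high, the second call only escalates to a human. That asymmetry is what keeps the data-driven component inside a predictable envelope.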
To put it simply, “ordinary AI” does not work here. The large language models that commercial AI systems use are not transparent. They are trained on data hidden behind a cloak of secrecy that prevents the public from knowing exactly where the intelligence comes from. Critical-systems AI must be purpose-built and open to scrutiny. It must be trusted, explainable, and cyber-protected at every layer, from data collection to system integration.
Building AI you can trust
At the core of trustworthy AI is the simple fact that humans must remain in control. This philosophy is built around human-centric design. AI should empower, not replace, human expertise.
For example, AI can help drone operators manage information overload during high-stakes missions, flagging anomalies or recommending actions to reduce mental fatigue. However, the human operator stays in the loop, validating and deciding based on experience, context, and judgment. Trustworthy AI amplifies human intelligence; it does not substitute for it.
This principle forms the basis of the “Trustworthy AI” framework, structured around three pillars:
- Secure: AI must be protected from cyber threats, ensuring data integrity and resilience.
- Transparent: Models must be explainable, providing contextual justifications for their outputs.
- Valid: Systems must be tested, certified, and proven to do only what they are designed to do.
Shaping Singapore’s AI future
Singapore is fast becoming a global proving ground for this class of AI. Under its Smart Nation initiative and the National AI Strategy, the Little Red Dot is building an ecosystem that cleverly balances innovation with responsibility. Rather than chasing scale for its own sake, Singapore focuses on applied AI: solutions that enhance safety, resilience, and efficiency across industries.
Thales’ cortAIx accelerator, headquartered in Singapore, embodies this mission. The centre brings global research expertise to local applications and deploys breakthroughs developed on the island worldwide. One pilot project is an AI-powered radar system that improves drone identification accuracy by 35% while cutting false alarms from birds and urban noise by 70%. This is a critical advance for the dense urban airspaces found across the region.
In another example, the Thales Avionics Industrial Centre in Singapore deployed an intelligent anomaly detection system to help technicians identify faults faster and more accurately. This delivered improved safety, streamlined operations, and a model that will be replicated globally. Each of these successes strengthens Singapore’s reputation as a leader in trustworthy, operational AI.
Critical action for innovation
To sustain this momentum, Singapore must focus on several strategic actions. Developing governance frameworks is essential to establishing standards for validation and certification across critical sectors, tailored to the nation’s regulatory and cultural context.
Nurturing strong public-private partnerships between agencies like the Defence Science and Technology Agency (DSTA), Home Team Science & Technology Agency (HTX), and Civil Aviation Authority of Singapore (CAAS) and industry leaders will accelerate safe and effective AI adoption.
At the same time, using hybrid AI models that combine data-driven learning with physics-based or rule-based systems will strengthen robustness and predictability.
Finally, empowering human-machine collaboration by keeping people in the loop will ensure that every AI decision remains explainable, traceable, and accountable.
These steps will help Singapore and the wider Asia Pacific region define what responsible AI deployment looks like in mission-critical environments.
Vision forward
For Asia’s critical infrastructure, from defence and aviation to smart cities, trustworthy AI is not optional. It is the foundation of safety, resilience, and economic confidence. As nations across the region accelerate AI adoption, the lesson is clear: reliability is innovation’s true test.
In Singapore, the next generation of AI is being built not in search of convenience but in service of trust. That trust, engineered through human-centric design and rigorous validation, will define the systems that keep our skies safe, our cities secure, and our future stable. Because in a world that demands flawless performance and millisecond precision, trust is not a nice-to-have.
Dr Marko Erman is the SVP and chief scientific officer of Thales
