Artificial intelligence (AI) and generative AI investments in the Asia Pacific are projected to grow at a CAGR of 24% to reach US$110 billion ($143 billion) by 2028, according to market research firm International Data Corp. The region’s banking sector alone is expected to increase digital transformation spending from US$30.4 billion in 2024 to US$48.6 billion ($62.9 billion) by 2027, with much of it directed toward AI-powered solutions for fraud detection, personalised recommendations, and automated customer service.

Yet, translating this investment into effective implementation and measurable impact remains a challenge for many financial institutions. Recognising this, the Monetary Authority of Singapore (MAS) has launched PathFin.ai, or the PathFinder Programme, which helps financial institutions that are earlier in their AI journey adopt proven and responsible solutions. The programme builds a library of real-world AI use cases, industry-validated solutions and best practices to help financial institutions scale AI more effectively.

MAS’s Chief FinTech Officer Kenneth Gay outlines how PathFin.ai is accelerating effective and responsible AI adoption across the financial services sector.

What specific market gaps or industry pain points prompted the creation of PathFin.ai? How will PathFin.ai bridge the gap between sophisticated AI users and laggards?

When MAS engaged financial institutions on their AI adoption, we identified three key gaps. First is information asymmetry: financial institutions find it challenging to identify AI solutions that have been successfully deployed by their peers. Second is capability gaps: even when financial institutions find good solutions, they may lack the internal expertise to implement them effectively. Third is fragmented learning: AI knowledge is scattered across financial institutions, vendors and other organisations, with no readily available trusted source.


The PathFin.ai programme bridges these gaps by bringing together financial institutions to share their AI implementation experiences. The programme features a knowledge hub with successful use cases curated by industry participants in key areas such as sales and marketing, customer operations, risk management, and engineering and technology. It also showcases learnings from PathFinder financial institutions that have successfully deployed AI in these areas.

MAS collaborates with industry partners to progressively enhance the knowledge hub with more peer-validated use cases, resources and solutions. This helps deepen financial institutions’ understanding of how AI can be applied and reduces the time taken to implement AI solutions. The knowledge hub also features resources on workforce planning and case studies of successful role redesign, along with upskilling pathways that finance professionals can access.

Through this collaborative approach, PathFin.ai enables knowledge exchange where early adopters’ experiences benefit the entire ecosystem and help uplift the sector’s AI capabilities.

Could you elaborate on the process of how use cases and solutions are selected and industry-validated? What are the specific criteria, and who are the key decision-makers in this?



Our curation process is built around what we call “PathFinder” financial institutions: institutions that have implemented AI solutions in their live environments and are willing to share their real-world experiences with the broader community.

We only consider solutions that have demonstrated measurable results in actual financial services deployments.

In terms of decision-making, the PathFinder financial institutions themselves serve as the primary validators. They share their implementation experiences and results, while their peers evaluate whether these solutions would work in their own contexts. MAS facilitates this process and provides the framework, but the industry expertise and validation come directly from the practitioners.

Through the PathFin.ai knowledge hub, there is an ongoing feedback loop where financial institutions continuously share their experiences and outcomes. Rather than a one-time assessment, solutions are validated through peer review and real-world performance data from multiple institutions over time.

PathFin.ai offers “proven AI solutions”, but AI technology evolves rapidly. How do you ensure those solutions remain safe and relevant? What mechanisms exist if a listed model fails or introduces bias?

First, it’s important to understand that the responsibility for implementing AI solutions remains entirely with each participating financial institution. Financial institutions retain full responsibility for their own risk assessment of every AI use case, their implementation decisions, AI ethics frameworks, and ongoing monitoring.

What we seek to enable is a mechanism where participating PathFinder institutions share their real-world experiences (both successes and challenges) with their peers. This creates greater awareness of what works in different situations and of emerging issues that financial institutions should watch for. Our role is to facilitate this learning so that financial institutions can adapt to the evolving AI landscape.

As AI rules tighten globally — from the EU AI Act to potential US frameworks — how does PathFin.ai support multinational banks in staying compliant across jurisdictions?

The responsibility for ensuring compliance with different jurisdictions’ AI regulatory frameworks rests with financial institutions themselves.

Each multinational institution must conduct its own jurisdiction-specific compliance assessments and ensure that it meets the requirements of every jurisdiction where it operates. 

Where PathFin.ai adds value is through our collaborative approach to knowledge sharing. We facilitate the exchange of experiences from PathFinder institutions that have successfully navigated AI implementations across different regulatory environments. 

Much of the bottleneck for AI adoption is cultural. How do you expect PathFin.ai to change the way boards and risk committees think about AI deployment?

One of the focus areas of PathFin.ai is to engender greater understanding among financial institutions’ leadership of what AI-in-Finance is capable of, from multiple perspectives including opportunities, return on investment, risk management and responsible AI.

By providing concrete examples of how peer institutions have successfully implemented AI solutions and achieved actual business outcomes, the programme shifts the conversation from ‘whether to adopt AI’ to ‘how to adopt AI responsibly.’

PathFin.ai is also working with education partners, such as the Institute of Banking and Finance (IBF), to curate executive-level courses that focus on AI governance frameworks, practical risk management, and decision-making on AI.

PathFin.ai partners with training providers. What specific skills gaps are you addressing, and how are you ensuring workforce transformation keeps pace with AI advancements?

The skills gaps we are addressing reflect the evolving nature of work as AI becomes more prevalent across the financial sector. Rather than predicting specific impacts, we are taking a proactive approach to ensure our workforce is prepared for various scenarios of AI adoption.

All workers will benefit from foundational AI skills, such as prompt design, an understanding of AI principles, and the ability to apply governance and ethical considerations in their roles. These foundational capabilities will help workers engage effectively with AI tools regardless of how their specific roles may evolve.

We are also supporting role-specific training in areas where financial institutions are actively exploring AI applications, such as sales and marketing, customer operations, risk management, and engineering and technology. The specific training needs will depend on how individual firms choose to adopt AI and redesign their processes.

In the AI era, learning agility, adaptability, and soft skills such as strategic thinking will be just as important as technical know-how. These human-centric capabilities will be essential for individuals to work effectively alongside AI and exercise sound judgment.

Together with our training partners, we are focusing on three key areas: strengthening foundational AI and data literacy across the workforce; deepening technical and applied AI capabilities among practitioners; and developing leadership and change management skills to drive responsible transformation.

As AI adoption progresses, MAS and IBF are working closely with industry partners to understand how jobs may be changing and to support financial institutions in proactively upskilling their workforce. We encourage firms adopting AI to concurrently equip their staff with the skills to harness AI responsibly and enhance productivity.

We are encouraged by financial institutions that are taking the lead in preparing their workforce and sharing their experiences via PathFin.ai, thereby supporting peer learning and exchange. We will continue to work together with industry and unions as a tripartite community to help realise AI’s potential for better outcomes for all.