“The guidelines aim to establish a set of expectations that are generally applicable across the financial sector, and may be applied in a proportionate manner – commensurate with the size and nature of FIs’ activities, use of AI, and their risk profiles,” it adds.
In its paper, the MAS has set out expectations for FIs in the oversight of AI risk management.
“Board and senior management of FIs play a key role in the governance and oversight of AI risk management, including the establishment and implementation of frameworks, structures, policies and processes for AI risk management, and fostering the appropriate risk culture for the use of AI,” the central bank explains.
The MAS also expects FIs to establish clear identification processes for the use of AI across the firm and maintain accurate and up-to-date AI inventories. FIs should also implement risk materiality assessments that factor in the dimensions of impact, complexity and reliance.
Finally, the MAS expects FIs to plan for and implement “robust controls” in key areas such as data management, fairness, transparency and explainability, human oversight, third-party risks, evaluation and testing, as well as monitoring and change management. The controls should be applied based on their relevance and should be proportionate to the assessed risk materiality of the use of AI.
MAS’s deputy managing director, Ho Hern Shin, says the proposed guidelines will provide FIs with “clear supervisory expectations” to support them in leveraging AI in their operations.
“These proportionate, risk-based guidelines enable responsible innovation by financial institutions that implement the relevant safeguards to address key AI-related risks,” she adds.
The public consultation paper is available on MAS’s website. Interested parties should submit their comments by Jan 31, 2026.
Complementary industry handbook
To complement the proposed rules, the Project MindForge consortium will publish an “AI Risk Management Executive Handbook”. The handbook outlines key components of good AI risk management and will be followed by a detailed document with actionable insights and industry practices next year.
"The guidelines and the handbook will work together. For example, the Guidelines will require risk materiality assessments, while the Handbook will provide examples of how such assessments are done,” says Chia Der Jiun, managing director at MAS, at the Singapore FinTech Festival 2025 this morning.
The dual approach allows MAS to set flexible regulatory expectations while providing industry-developed best practices that can be updated as technology evolves. Financial institutions have told regulators that clarity on governance requirements will help them accelerate AI adoption safely.
