Building Trust in AI Healthcare: Lessons from South-East Asia
Bangkok, Friday, 5 December 2025.
South-East Asian health systems showcase effective strategies for integrating AI in healthcare, emphasizing verifiability and accountability to build patient trust and enhance global health practices.
The Trust Gap in AI Healthcare
Despite high adoption of AI in healthcare across South-East Asia, where 80% of residents reportedly use AI for healthcare services, trust in these technologies remains fragile. This is particularly evident in sensitive areas such as mental health, where trust levels decline sharply. The lack of transparency in AI-generated risk scores and the diffusion of accountability across developers, hospitals, and governments contribute to a foundational trust gap [1].
Strategies for Building Trust
To address these challenges, South-East Asian countries have initiated several strategies promoting transparency and accountability. Singapore’s Health Promotion Board has established the HPRG Innovation Office, which focuses on ethical AI practices, while Indonesia’s BPJS Digital Health Transformation Strategy emphasizes digital safety in public health [1]. These initiatives underscore the importance of introducing ‘minimum viable assurance’ metrics such as clinician override rates, which serve as indicators of AI reliability [1].
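As a rough illustration of how such an assurance metric could be tracked, the sketch below computes a clinician override rate from a log of AI recommendations and clinician decisions. The record structure and field names are hypothetical assumptions for this example, not drawn from any system named in the article.

```python
from dataclasses import dataclass

# Hypothetical decision record; the field names are illustrative
# assumptions, not taken from any specific hospital system.
@dataclass
class Decision:
    ai_recommendation: str   # what the AI model suggested
    clinician_action: str    # what the clinician actually did

def override_rate(decisions: list[Decision]) -> float:
    """Fraction of cases where the clinician departed from the AI recommendation."""
    if not decisions:
        return 0.0
    overrides = sum(
        1 for d in decisions if d.clinician_action != d.ai_recommendation
    )
    return overrides / len(decisions)

# Toy log: one override out of four decisions.
log = [
    Decision("refer", "refer"),
    Decision("refer", "discharge"),  # clinician overrode the model
    Decision("monitor", "monitor"),
    Decision("discharge", "discharge"),
]
print(f"Override rate: {override_rate(log):.1%}")  # → 25.0%
```

Tracked over time and broken down by clinical area, a metric like this is one concrete way to operationalize the "minimum viable assurance" idea described above.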
Lessons from Global Perspectives
Globally, the need for transparent AI systems is echoed by the World Health Organization and the European Union, both of which mandate audit mechanisms and human oversight to ensure model verifiability [1]. A 2025 study highlights that clinician override rates, a measure of skepticism, are far lower for predictions clinicians perceive as transparent (1.7%) than for those perceived as opaque (over 73%) [1]. This finding suggests that transparency and clarity in AI operations can strengthen clinician trust and adoption.
Implications for Future AI Integration
The strategic initiatives in South-East Asia provide a framework for AI integration that could be adopted globally. By prioritizing transparency and accountability, healthcare systems can improve patient engagement and trust, leading to more effective digital health solutions. Continuous monitoring and patient feedback mechanisms, such as 'teach-back' checkpoints in which patients restate what an AI-assisted consultation told them, can further solidify trust and confirm that patients understand the information guiding their care [1].