Philips Highlights the Importance of Trust in Healthcare AI
Amsterdam, Monday, 19 May 2025.
Philips has emphasized trust-building strategies in AI for healthcare, focusing on improving patient care while addressing ethical concerns.
Critical Trust Gap in Healthcare AI Adoption
Recent findings from Philips’ Future Health Index 2025 report, released on May 15, 2025, reveal a significant disparity in AI trust between healthcare professionals and patients. While 63% of healthcare professionals express optimism about AI’s potential to improve patient outcomes, patient confidence remains notably lower, with only 33% of patients aged 45 and older showing positive sentiment toward AI in healthcare [1]. This trust deficit emerges at a critical time when healthcare systems face mounting pressures: in over half of the 16 surveyed countries, patients wait approximately two months or longer for specialist appointments [2].
Current Healthcare Challenges Driving AI Adoption
The urgency for AI integration is underscored by concerning statistics about healthcare delivery efficiency. According to the report, 75% of healthcare professionals lose valuable clinical time to incomplete or inaccessible patient data, with one-third losing more than 45 minutes per shift, equivalent to 23 full days annually [2]. Moreover, 33% of patients have experienced deteriorating health conditions due to appointment delays, and more than one in four ultimately require hospitalization because of extended wait times [2]. Particularly alarming is the situation for cardiac patients, 31% of whom face hospitalization before their initial specialist consultation [2].
Professional Perspectives and Implementation Challenges
Healthcare professionals demonstrate a growing recognition of AI’s potential benefits, with 85% believing it can reduce administrative burdens and 74% seeing improved patient access as a key advantage [1]. However, significant concerns persist about implementation and accountability. While 69% of clinicians are involved in AI development, only 38% believe current AI tools adequately address real-world healthcare needs [2]. A striking 75% of clinicians express uncertainty about liability for AI-driven errors, highlighting the need for clearer regulatory frameworks [2].
Building Trust Through Collaboration and Transparency
Jeff DiLullo, Chief Region Leader of Philips North America, emphasizes that ‘AI is reshaping healthcare – but its future depends on trust, transparency and collaboration with clinicians and patients’ [1]. The comprehensive study, which gathered data from over 1,900 healthcare professionals and more than 16,000 patients across 16 countries [1], points to the critical need for educational initiatives and increased transparency in AI decision-making processes. Healthcare leaders suggest that by 2030, AI could double patient capacity through automation of administrative tasks [2], but achieving this potential requires addressing current trust barriers and establishing robust safeguards for patient safety [2].