Trust in AI-assisted health systems and AI’s trust in humans
Tinglong Dai, Mario Macis, Michael Darden, Madeline Sagona
Abstract
Artificial intelligence (AI) is reshaping healthcare, promising improved diagnostics, personalized treatments, and streamlined operations. Yet a lack of trust remains a persistent barrier to widespread adoption. This Perspective examines the web of trust in AI-assisted healthcare systems, exploring the relationships it shapes, the systemic inequalities it can reinforce, and the technical challenges it poses. We highlight the bidirectional nature of trust, in which both patients and providers must trust AI systems, while these systems rely on the quality of human input to function effectively. Using models of care-seeking behavior, we explore AI’s potential to affect patients’ decisions to seek care, to influence trust in healthcare providers and institutions, and to produce differing effects across demographic and clinical settings. We argue that addressing trust-related challenges requires rigorous empirical research, equitable algorithm design, and shared accountability frameworks. Ultimately, AI’s impact hinges not just on technical progress but on sustaining trust, which may erode if biases persist, transparency falters, or incentives misalign.
Citation
Sagona, M., Dai, T., Macis, M. et al. Trust in AI-assisted health systems and AI’s trust in humans. npj Health Syst. 2, 10 (2025). https://doi.org/10.1038/s44401-025-00016-5