
Want patients to trust AI in health care? Tell them humans are biased, too
Key Takeaways
- Highlighting human biases can increase patient receptivity to AI in health care by enhancing perceptions of AI's fairness and integrity.
- The study involved nearly 1,900 participants and showed that bias salience reduced resistance to AI-driven recommendations.
Study shows patients are more receptive to AI recommendations when they better understand the biases in human decision-making.
Understanding bias salience
Lead researcher Rebecca J. H. Wang, an associate professor of marketing at Lehigh University, explained that bias is often viewed as a human shortcoming. "When the prospect of bias is made salient, perceptions of AI integrity—defined as the perceived fairness and trustworthiness of an AI agent relative to a human counterpart—are enhanced," she said in a statement.
Key findings
The study involved nearly 1,900 participants across six experiments, each designed to evaluate how patients responded to health care recommendations, such as coronary bypass surgery or skin cancer screening, when given by either a human provider or an AI-driven assistant. Some participants were primed to think about bias beforehand, including reviewing common cognitive biases or reflecting on personal experiences with bias in health care.
The results:
- Participants who were reminded of potential biases in human health care rated AI as offering greater "integrity," meaning they saw it as more trustworthy and fair.
- While most people still preferred human health care, bias salience reduced resistance to AI-driven recommendations, likely because people associated bias more strongly with human providers.
- When bias salience was high, participants placed more value on AI’s perceived objectivity compared to the subjectivity of human providers.
Implications for the future of AI in medicine
As AI becomes increasingly integrated into health care, from diagnostics to treatment recommendations, this study suggests that addressing patient concerns about human bias could help ease their resistance to AI. For health care providers, discussing the limitations of human judgment and emphasizing the objectivity AI can offer may foster a more trusting relationship between patients and emerging technologies.
AI’s role in health care is only expected to grow, with billions of dollars projected to be invested in the coming years. Researchers say developers of AI systems will need to focus on minimizing bias in training materials and on providing clear context about human biases as part of the patient experience with AI.
"By addressing patients’ concerns about AI and highlighting the limitations of human judgment, health care providers can create a more balanced and trusting relationship between patients and emerging technologies," Wang said.