Even administrative use of artificial intelligence lowers perceived empathy, trust and competence, according to new research.
As artificial intelligence (AI) continues to become more embedded in everyday medical workflows, a new study suggests that just mentioning its use could hurt a physician’s reputation with patients.
A team of researchers from the University of Wuerzburg and the University of Cambridge — Moritz Reis, M.Sc., Florian Reis, M.D., and Wilfried Kunde, Ph.D. — found that U.S. adults were significantly less likely to trust, feel empathy toward or seek care from a physician who advertised using AI, whether for diagnostic, therapeutic or even administrative tasks.
Published July 17 in JAMA Network Open, the study surveyed 1,276 U.S. adults in January 2025, each randomly assigned to view a mock advertisement for a family physician.
Some ads made no mention of AI, while others specified its use in different capacities.
The result: in every case, mentioning AI use lowered scores for perceived competence, trustworthiness and empathy. Participants were also less likely to say they would book an appointment.
When compared with the control group — who were shown a physician ad with no mention of AI — those who saw ads referencing AI use rated the physician lower across the board.
For example, the average empathy rating for a physician not using AI was 4.00 out of 5. That rating dropped to 3.80 when AI was used for administrative work, 3.82 for diagnostic use and 3.72 for therapeutic use.
Trustworthiness scores followed a similar pattern. They fell from 3.88 in the control group to 3.66, 3.62 and 3.61, respectively.
Willingness to book an appointment dropped most dramatically when AI was mentioned for therapeutic purposes.
“In line with prior research, our results indicate that the public has certain reservations about the integration of AI in health care,” the authors wrote. “While the present effect sizes are relatively small, in particular regarding AI use for administrative purposes, they may be highly relevant as trust in health care practitioners is closely linked to subjective treatment outcomes.”
It is notable that even administrative AI — often thought of as less invasive or visible to patients — still negatively influenced perception. Across all AI use cases, physicians were seen as less competent than their non-AI-using peers.
Although it is unclear exactly where this skepticism around AI originates, the authors theorize it may stem from concerns that physicians over-rely on technology, or that AI use signals a more impersonal approach to care. Broader worries about data protection and the cost implications of a physician adopting new technology may also play a role.
The researchers acknowledge that their experimental design, which relied on fictitious online ads, may not fully capture how real-world patients interact with AI in clinical settings. Would patients feel as strongly if they weren’t being asked about it? It’s difficult to say. Still, the researchers say the results point to a clear communication gap.
Physicians and practice managers using AI tools — particularly in independent practices where patient relationships are paramount — may want to proactively explain how these technologies improve efficiency and patient outcomes without replacing clinical judgment.
“From the physician’s perspective, it thus may be important to transparently communicate the rationale for using AI and to emphasize its potential benefits for the patient,” the authors wrote.