October 29, 2025

Physicians who use AI face a ‘competence penalty,’ Johns Hopkins study finds

Fact checked by: Keith A. Reynolds

Key Takeaways

  • Physicians using AI as a primary decision tool are perceived as less competent by peers, affecting their clinical skill ratings.
  • AI's role as a verification tool reduces stigma but does not eliminate it, highlighting the need for careful integration.

Clinicians relying on AI-powered decision-making were considered less skilled and less competent by their peers than those who did not use AI.

With new artificial intelligence (AI) tools rolling out each week, physicians are repeatedly urged to keep up with the times, innovate and try them out — just try not to look like you need them. A study from Johns Hopkins University, published in npj Digital Medicine, quantifies that tension.

When physicians appeared to rely on generative AI to make a treatment decision, their peers docked them on clinical skill and overall competence. The effect was strongest when AI was framed as the primary decision-maker, and it lessened — though it did not disappear — when AI was merely used as a verification step.

“AI is already unmistakably part of medicine,” said Tinglong Dai, Ph.D., professor at the Johns Hopkins Carey Business School and co-corresponding author of the study. “What surprised us is that doctors who use it in making medical decisions can be perceived by their peers as less capable. That kind of stigma, not the technology itself, may be an obstacle to better care.”

A closer look

Researchers ran a randomized experiment with 276 practicing clinicians — attending physicians, residents, fellows and advanced practice providers (APPs) — from a large academic health system. Participants read a brief vignette about adult diabetes management that differed only in how (or whether) AI was used:

  • No mention of AI being used (control)
  • AI used as the primary decision tool
  • AI used to verify a physician’s plan

After each scenario, participants rated the physician’s clinical skills, overall competence and the expected care experience.

Ratings fell as visible dependence on AI rose, with the verification condition landing between the control and the AI-primary condition. Statistical tests indicated that lower perceived clinical skill helped explain the drop in overall competence and expected care experience when AI was in the foreground.

Colleagues value accuracy, warily

Study participants acknowledged that AI could help improve accuracy in assessment and rated institutionally customized systems more favorably than generic ones, according to the paper. Still, visible dependence on AI was associated with lower peer ratings of clinical skill and overall competence.

“In the age of AI, human psychology remains the ultimate variable,” said first author Haiyang Yang, Ph.D., academic program director of the Master of Science in Management program at Carey.

“As AI becomes part of the future of medicine, it’s important to recognize its potential to complement — not replace — clinical judgment, ultimately strengthening decision making and improving patient care,” added co-corresponding author Risa M. Wolf, M.D., associate professor of pediatric endocrinology at Johns Hopkins School of Medicine with a joint appointment at Carey.

The bottom line

As things currently stand, visibly leaning on AI tools carries a peer-perception tax.

That said, there is nuance here. Framing AI as a tool to double-check or verify a physician's plan, demonstrating independent reasoning and measuring outcomes locally can all narrow the gap between what AI can add and how clinicians feel about colleagues who use it.
