Toward a science of scaling medical artificial intelligence


Medical AI stakeholders must work together to create reimbursement models that incentivize broader, appropriate use of medical AI while also ensuring financial sustainability.

Tinglong Dai: ©Johns Hopkins

These days, you can barely spell “health care” without the letters “A” and “I” — and for good reason. In an era of aging populations, declining productivity, rising costs, and disparities in access to care, artificial intelligence presents a rare opportunity to break the vicious cycle by improving health care access, outcomes, and productivity while lowering costs.

And it’s more than just an opportunity. As of May 2024, the U.S. Food and Drug Administration had cleared 882 AI-powered medical devices for clinical use. The recent boom in generative AI has further fueled enthusiasm for medical AI.

However, while rapid progress is being made in developing AI tools, widespread — and appropriate — use of trusted, rigorously validated medical AI remains out of reach for too many, as evidenced by insurance claims data for FDA-approved medical AI devices. A science of scaling medical AI is needed. Two recent papers shed light on this emerging science, which focuses on integrating AI into health care workflows.

Michael Abramoff: ©University of Iowa

In a first-of-its-kind randomized controlled trial published in the Nature Portfolio journal npj Digital Medicine, we collaborated with Orbis, an international nonprofit widely known as the “flying eye hospital,” and a team of clinicians in Bangladesh to show how a rigorously validated autonomous AI system for diabetic eye exams increased clinical productivity. The results were striking: compared with the non-AI control group, the autonomous AI system produced a 40% increase in completed patient encounters per hour and higher physician satisfaction. Controlling for case complexity, we found a more than 200% increase in clinician productivity.
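
The headline figures are simple relative comparisons. As a minimal sketch (the throughput numbers below are hypothetical, not the trial data, and the complexity adjustment behind the 200% figure is not reproduced here), the calculation looks like this:

```python
# Toy illustration of how a relative productivity gain is computed.
# The encounter rates below are hypothetical, not the trial's data.

control_encounters_per_hour = 10.0  # hypothetical throughput without AI
ai_encounters_per_hour = 14.0       # hypothetical throughput with autonomous AI

relative_increase = (
    ai_encounters_per_hour - control_encounters_per_hour
) / control_encounters_per_hour

print(f"Relative increase in completed encounters per hour: {relative_increase:.0%}")
# -> Relative increase in completed encounters per hour: 40%
```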

Our study provides the first real-world randomized controlled trial — the gold standard of scientific research — to demonstrate that autonomous AI can improve clinical productivity and increase access to care, especially in resource-constrained settings.

Now, even for medical AI systems that have demonstrated clear real-world evidence of clinical and productivity benefits, the question of how to pay for them is an open one. Consider Pear Therapeutics, a developer of AI therapeutic software that collapsed due to lack of insurance reimbursement, despite obtaining regulatory approvals, demonstrating improved patient outcomes, and establishing distribution partnerships.

There is no substitute for sustainable reimbursement models if medical AI solutions are to remain viable over the long term. Together with James Zou of Stanford University, we recently published an article in NEJM AI that explores the different paths medical AI creators can take toward reimbursement, provided that basic ethical requirements are met. We paid particular attention to the roles of different stakeholders and to how different reimbursement systems align incentives and make it easier for medical AI to reach more people.

Consider two common payment models: fee-for-service (FFS) and value-based care (VBC). FFS models are well understood within the health care system and make the financial impact of a new technology transparent and straightforward to evaluate. However, even for a novel AI that has been rigorously validated and has demonstrated clinical impact, FFS can be difficult to achieve on the timelines that physician-led startups operate under: it can take years for an AI device to progress from a Category III CPT code to a Category I CPT code, through AMA RUC valuation, and finally to a sustainable CMS reimbursement decision, and this process is a huge drain on an AI startup’s limited resources. Few AI creators have the resources to qualify for FFS.

VBC, on the other hand, ties reimbursement to process and population health metrics that are deemed valuable. In some cases, VBC models require less time and fewer resources to generate the needed evidence, for example when the AI under consideration fits an existing standard of care (or preferred practice pattern) and established quality measures. However, determining the financial impact of AI on a health system under VBC can be difficult, because some quality measures are complex, indirect, and discontinuous, and because risk adjustment adds further complexity.
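
To make the contrast concrete, here is a rough sketch of how the two models generate revenue for an AI developer: FFS pays a fee per use, while a simple shared-savings VBC arrangement pays a share of savings against a cost benchmark, gated on a quality target. All names and numbers below are invented for illustration; real payer contracts involve attribution, risk adjustment, and quality-measure lags that this toy model ignores.

```python
# Toy comparison of fee-for-service (FFS) vs. a shared-savings flavor of
# value-based care (VBC) for a hypothetical AI diagnostic.
# All figures are invented for illustration only.

def ffs_revenue(uses: int, fee_per_use: float) -> float:
    """FFS: revenue scales directly and transparently with volume."""
    return uses * fee_per_use

def vbc_shared_savings(attributed_patients: int,
                       baseline_cost_per_patient: float,
                       achieved_cost_per_patient: float,
                       quality_target_met: bool,
                       shared_savings_rate: float) -> float:
    """VBC: revenue depends on savings against a benchmark, gated on quality."""
    if not quality_target_met:
        return 0.0
    savings = (baseline_cost_per_patient - achieved_cost_per_patient) * attributed_patients
    return max(savings, 0.0) * shared_savings_rate

print(ffs_revenue(uses=5_000, fee_per_use=45.0))             # 225000.0
print(vbc_shared_savings(attributed_patients=5_000,
                         baseline_cost_per_patient=1_200.0,
                         achieved_cost_per_patient=1_150.0,
                         quality_target_met=True,
                         shared_savings_rate=0.30))          # 75000.0
```

The point of the sketch is the incentive structure, not the dollar amounts: under FFS the developer is paid whenever the AI is used, while under VBC payment hinges on measurable improvement against a benchmark and on meeting quality criteria, which is why evidence requirements and financial predictability differ so much between the two models.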

Neither model is sufficient to scale medical AI to the levels needed to realize its potential to close the productivity gap. We need new ways to pay for medical AI that incentivize scaling those systems that are trusted and validated to meet the needs of diverse stakeholders, including patients, providers, insurers, ethicists, and AI developers and investors. Many AI reimbursement initiatives are currently being discussed by stakeholders and lawmakers. Because payers and other stakeholders first need confidence in a candidate’s validity, efficacy, and safety before deciding on reimbursement, one option is a technology assessment body that evaluates an AI’s safety and efficacy against established bioethical principles, with the required evidence of patient benefit (such as access, clinical outcomes, and quality) pre-specified so that AI developers know what to work toward.

Imagine a health care system where high-quality AI systems support clinical decisions, streamline workflows, and enable clinicians to apply their expertise where it is most needed. By incentivizing the widespread and appropriate use of AI by clinicians, we create a strong incentive for AI developers to continually improve their systems — because it is financially beneficial for them to do so. This virtuous cycle leads to better AI devices and better population health outcomes. Together, our work envisions a future where AI not only improves clinical productivity, but is supported by sustainable reimbursement models that ensure equitable access to its benefits.

The evidence is clear: AI has the potential to dramatically improve health care delivery. Yet we urgently need a science of scaling medical AI to realize this potential. Medical AI stakeholders must work together to create reimbursement models that incentivize broader and appropriate use of medical AI while ensuring financial sustainability; there is no substitute for it. By aligning incentives and fostering collaboration, we can create a more effective, productive, and equitable health care system.

Tinglong Dai, PhD (@TinglongDai) is the Bernard T. Ferrari Professor at the Johns Hopkins Carey Business School, co-chair of the Johns Hopkins Workgroup on AI and Healthcare, part of the Hopkins Business of Health Initiative, and Vice President of Marketing, Communication, and Outreach at INFORMS, the world's largest professional association of decision and data sciences. Michael D. Abramoff, MD, PhD (@MichaelAbramoff) is the Robert C. Watzke, MD Professor of Ophthalmology and Visual Sciences at the University of Iowa, with a joint appointment in the College of Electrical and Computer Engineering. He is the founder and executive chairman of Digital Diagnostics, the first company to receive FDA de novo clearance for an autonomous AI diagnostic system, and also leads the Healthcare AI Coalition and is the founder of two of the FDA's Collaborative Communities on AI.
