
JAMA: Medicine is flying blind on AI


Key Takeaways

  • AI's integration into healthcare outpaces evidence of its effectiveness, with many tools lacking rigorous testing and oversight.
  • The 2024 JAMA Summit on AI emphasized the need for standards in validation, transparency, and accountability to prevent inequities and safety issues.

A year after its landmark artificial intelligence summit, JAMA says health systems are deploying unproven algorithms with little evidence they improve outcomes — or even do no harm.

© LALAKA - stock.adobe.com


Artificial intelligence (AI) has moved faster than medicine’s ability to measure it, and a new JAMA special report warns that, while AI is already woven into nearly every layer of health care, evidence proving its real-world value remains scarce.

The special communication, “AI, Health, and Health Care Today and Tomorrow,” published Monday, October 13, compiles findings from the JAMA Summit on Artificial Intelligence, held a year earlier in October 2024.

Led by Derek C. Angus, M.D., M.P.H., of the University of Pittsburgh, the report offers one of the clearest overviews yet of how AI is changing health care, and how little is known about its actual impact.

Evidence lags behind adoption

The capabilities of AI tools seem to be constantly reaching new heights — from diagnostic assistance to handling administrative tasks. More than 1,200 AI-enabled medical devices have been cleared by the U.S. Food and Drug Administration (FDA), most in imaging, and nearly 90% of U.S. health systems use some form of AI.

Yet, the authors of the report note that most of these tools enter practice with minimal testing and virtually no post-market evaluation.

Many tools operate outside FDA oversight altogether, including documentation software, patient scheduling algorithms and prior authorization systems that influence care but aren’t classified as medical devices.

“Even for tools over which the FDA does have authority, as noted above, clearance does not necessarily require demonstration of improved clinical outcomes,” the report states.

From summit discussion to action plan

The report stems from the 2024 JAMA Summit on Artificial Intelligence, which brought together clinicians, technologists and policy experts to discuss how medicine can move from AI enthusiasm to accountability.

Their conclusion: without standards for validation, oversight and transparency, the technology risks amplifying inequities and creating new safety problems.

The panel outlined four steps to close the gap between innovation and evidence:

  1. Collaborate early and often. Clinicians, patients, regulators and developers should all have a role in how AI tools are built and monitored.
  2. Test real-world effectiveness. Safety checks and compliance audits aren’t enough; health systems need data on outcomes and performance.
  3. Share data responsibly. A national infrastructure could make it possible to study how AI performs across diverse populations and settings.
  4. Align incentives. Payment models and policy should reward responsible use, not just rapid adoption.

Former FDA Commissioner Robert Califf, M.D., whose remarks were cited in the report, put it bluntly, saying, “I do not believe there’s a single health system in the United States that’s capable of validating an AI algorithm that’s put into place in a clinical care system.”

Beyond efficiency

While automation has become a selling point, the report authors caution against viewing AI as a cure for burnout. Tools that streamline documentation or workflow might help, but only if they actually save time and improve patient care. Otherwise, they risk creating new pressures.

“If freed from administrative tasks, clinicians may be asked to see more patients, which could also cause burnout,” they wrote.

The authors also highlight the uneven rollout of AI literacy and technical support, especially in under-resourced settings. Without attention to fairness and equity, they wrote, digital tools could widen existing gaps rather than close them.

The report doesn’t propose new regulations, but its message is clear: medicine needs proof before promises. The authors call for “an ecosystem capable of rapid, efficient, robust and generalizable knowledge about the consequences of these tools on health.”

As practices race forward with deployment of the latest AI-powered tools, the report poses a harder question — whether the tools designed to make medicine smarter are, in fact, making it better.
