A year after its landmark artificial intelligence summit, JAMA says health systems are deploying unproven algorithms with little evidence they improve outcomes — or even do no harm.
Artificial intelligence (AI) has moved faster than medicine’s ability to measure it, and a new JAMA special report warns that, while AI is already woven into nearly every layer of health care, evidence proving its real-world value remains scarce.
The special communication, “AI, Health, and Health Care Today and Tomorrow,” published Monday, October 13, compiles findings from the JAMA Summit on Artificial Intelligence, held a year earlier in October 2024.
Led by Derek C. Angus, M.D., M.P.H., of the University of Pittsburgh, the report offers one of the clearest overviews yet of how AI is changing health care, and how little is known about its actual impact.
The capabilities of AI tools seem to be constantly reaching new heights — from diagnostic assistance to handling administrative tasks. More than 1,200 AI-enabled medical devices have cleared the U.S. Food and Drug Administration (FDA), most in imaging, and nearly 90% of U.S. health systems use some sort of AI.
Yet, the authors of the report note that most of these tools enter practice with minimal testing and virtually no post-market evaluation.
Many operate outside FDA oversight altogether. That includes documentation software, patient scheduling algorithms and prior authorization systems that influence care but aren’t classified as medical devices.
“Even for tools over which the FDA does have authority, as noted above, clearance does not necessarily require demonstration of improved clinical outcomes,” the report states.
The summit brought together clinicians, technologists and policy experts to discuss how medicine can move from AI enthusiasm to accountability.
Their conclusion: without standards for validation, oversight and transparency, the technology risks amplifying inequities and creating new safety problems.
The panel outlined four steps to close the gap between innovation and evidence.
Former FDA Commissioner Robert Califf, M.D., whose remarks were cited in the report, put it bluntly, saying, “I do not believe there’s a single health system in the United States that’s capable of validating an AI algorithm that’s put into place in a clinical care system.”
While automation has become a selling point, the report authors caution against viewing AI as a cure for burnout. Tools that streamline documentation or workflow might help, but only if they actually save time and improve patient care. Otherwise, they risk creating new pressures.
“If freed from administrative tasks, clinicians may be asked to see more patients, which could also cause burnout,” they wrote.
The authors also highlight the uneven rollout of AI literacy and technical support, especially in under-resourced settings. Without attention to fairness and equity, they wrote, digital tools could widen existing gaps rather than close them.
The report doesn’t propose new regulations, but its message is clear: medicine needs proof before promises. The authors call for “an ecosystem capable of rapid, efficient, robust and generalizable knowledge about the consequences of these tools on health.”
As practices race forward with deployment of the latest AI-powered tools, the report poses a harder question — whether the tools designed to make medicine smarter are, in fact, making it better.