As use of artificial intelligence grows rapidly in health care, the American Medical Association brought together an expert panel to highlight policy changes, transparency concerns and implications for physician practices.
As we approach the midway point of 2025, artificial intelligence (AI) is continuing its swift march into every facet of health care — and policymakers are scrambling to catch up. An American Medical Association (AMA) Advocacy Insights webinar Tuesday afternoon brought together health policy experts who cautioned physicians and practice leaders about the evolving, and often inconsistent, regulatory landscape around AI tools.
AMA President Bruce A. Scott, MD, set the tone by emphasizing that, while physicians are optimistic about AI’s potential in health care, they have major concerns about liability, transparency and patient privacy.
“At the AMA, we like to refer to AI, not as artificial intelligence, but rather augmented intelligence, to emphasize the human component,” Scott said. “Patients should know that whatever the future holds, a physician will remain at the heart of their care.”
Jared Augenstein, senior managing director at Manatt Health, said more than 1,000 AI-enabled medical devices have received U.S. Food and Drug Administration (FDA) clearance, and around half of all physicians already use some form of AI in practice.
However, the pace of AI adoption has outstripped regulators’ ability to respond. “State and federal policymakers are grappling with how to balance innovation and rapidly advancing technology against concerns about accuracy, bias, privacy and other factors,” Augenstein said.
At the state level, more than 250 AI-related bills have been introduced this year alone, Augenstein noted. Most address transparency, anti-discrimination measures, payer use of AI and clinical applications. Still, the results remain uneven, leaving many states without clear guidelines.
Shannon Curtis, JD, the AMA’s assistant director of Federal Affairs, echoed the uncertainty on the federal side. She noted that the Biden administration’s detailed AI executive order was swiftly repealed earlier this year by the Trump administration, replaced with a new directive calling for deregulation.
Curtis described the current federal landscape as “an incredibly unsettled environment lacking clear direction,” adding that recent legislative proposals, including a controversial House budget provision imposing a 10-year moratorium on new state AI regulations, have only deepened concerns.
The panelists repeatedly flagged the use of AI tools in prior authorization as a significant concern for physicians, citing the AMA’s 2024 Prior Authorization Physician Survey, which found that 61% of physicians worry that AI will further increase denial rates for necessary treatments, compounding existing frustrations.
Emily Carroll, JD, a senior attorney at the AMA, underscored that physicians’ apprehensions are well-founded, noting the survey revealed 82% of physicians say prior authorization delays sometimes result in patients abandoning necessary treatment entirely.
Scott recounted personal experiences that illustrated two starkly different insurer approaches. He criticized insurers “leveraging AI primarily to deny prior authorizations more rapidly,” but also praised a Blue Cross Blue Shield plan explicitly using AI to expedite approvals, a direction the AMA supports.
Panelists stressed transparency as key to protecting physicians from unintended liability. Carroll said physicians need clear and detailed information about the limitations, accuracy and validation of AI tools they incorporate.
“Physicians engaging with AI should understand they are ultimately responsible for patient outcomes,” Curtis said. “We really need mandated transparency requirements for developers of clinical AI applications. Physicians must know exactly what these tools can and can’t do.”
Augenstein described transparency requirements as a “layer cake” approach, extending from AI developers down through health systems, clinicians and, ultimately, to patients. Colorado’s AI Act was cited as one model attempting this approach, despite challenges with implementation.
The panelists acknowledged liability as a significant concern for physicians. Without clear legal precedents, Curtis said, physicians could face increased malpractice exposure if an AI recommendation leads to a poor outcome.
“The AMA has long held that the person or entity best situated to manage the risks of poor AI performance should carry the liability,” Curtis said. “But right now, the prevailing sentiment is that if a physician engages with AI, they are ultimately responsible for its outcomes.”
Despite their concerns, the panelists acknowledged AI’s potential to streamline administrative tasks, enhance clinical decision-making and improve patient care, provided it is deployed responsibly.
Kim Horvath, JD, another senior attorney with the AMA, reminded attendees of states’ critical role in shaping future policy: “States move faster and often serve as laboratories for policy solutions. Protecting states’ ability to experiment and innovate is essential.”
Augenstein summed up the panel’s cautious approach: “We need guardrails, because without them, the risks are simply too great.”