The new malpractice frontier: Who’s liable when AI gets it wrong?
Artificial intelligence is entering exam rooms faster than malpractice law can keep up. Here’s what physicians need to know about how to use these tools without inviting avoidable risk.
Artificial intelligence (AI) is no longer spoken of in the future tense. Practices are piloting ambient scribes, payers are embedding predictive models into prior authorization portals, and health systems are rolling out triage and imaging tools.
With each new implementation comes a practical concern for physicians. If AI makes — or fails to make — a critical call, who is responsible?
Health systems? AI developers? Physicians themselves? For now, the law offers more questions than answers.
“So far, there has been no documented malpractice case in the U.S. where AI was central to the claim,” said Deepika Srivastava, chief operating officer at The Doctors Company. “Most insurers are reporting only seeing limited AI-specific claims.”
She added that adoption is growing faster than legal frameworks are evolving, and that will force courts and carriers to “handle the risk AI presents.”
That gap leaves physicians balancing two kinds of exposure: being faulted for ignoring a tool their peers use or relying on it too much. As Srivastava put it, “Not using AI could be seen as negligent, while today, relying on it too heavily may be considered careless. It’s a balancing act.”
In May 2024, the American Law Institute approved its first-ever restatement of the law of medical malpractice. As detailed in a February 2025 JAMA special communication by Daniel Aaron, M.D., J.D., associate professor of law at the University of Utah, the restatement represents a shift away from strict reliance on customary practice and toward a more patient-centered concept of reasonable care.
Courts are now invited to weigh evidence-based guidelines and contemporary standards, even when prevailing customs fall short.
The standard of care changes with clinical practice, so in areas where an AI-enabled device or workflow becomes pervasive and demonstrably useful, the expectation of what a “reasonable physician” would do will move with it.
“It’s very likely that AI will reshape the standard of care,” said David A. Simon, J.D., LL.M., Ph.D., an associate law professor and expert on health care law and liability. “But it’s not going to be one-and-done. It’s going to happen gradually, in certain subspecialties, in fits and starts.”
He noted that whether a given tool is FDA-regulated as a device — and how it came to market — will matter, as will how widely peers actually use it.
For smaller practices, fairness questions will arise if courts equate “standard” with “available to well-resourced systems.”
For now, physicians should watch their specialty societies and major health systems for clear guidance on when use of an AI tool is expected rather than optional.
When AI contributes to patient harm, responsibility will rarely lie with a single party.
“From the manufacturer to the hospital system to the doctor…it’s just a lot of unknowns,” Simon said. He compared the emerging situation to self-driving cars, where vehicle manufacturers have faced product liability claims when their autonomous systems malfunctioned.
In medicine, plaintiffs often sue manufacturers under product liability. Think back to 2021, when Philips recalled millions of continuous positive airway pressure and bilevel positive airway pressure devices after foam degradation raised safety concerns for users. Lawsuits were subsequently filed against Philips for defective design and failure to warn. Even though physicians had recommended the devices and managed patients who used them, liability ultimately rested with the manufacturer because of the defect.
Similar strategies can be expected for AI devices cleared through the FDA’s 510(k) or De Novo pathways, especially if the claim alleges a design defect or inadequate warnings. Recent case law is beginning to test those boundaries. In Dickson v. Dexcom Inc. (2024), a Louisiana court considered whether manufacturers could use FDA De Novo authorization to block broad personal injury and products liability claims. Although the case did not directly involve AI, it could have major implications for AI-enabled devices, since many rely on the same regulatory pathway.
Physicians, however, remain squarely on the hook for clinical judgment. Srivastava said she still believes they “bear the biggest risk.” Hospitals and health systems face exposure if they fail to vet and govern tools properly — validation, training, consent workflows and auditing all fall within their duties. Vendor contracts and indemnities can also shift liability long before a case reaches court.
Sara Gerke, an associate professor of law at the University of Illinois Urbana-Champaign, agreed that as the law currently stands, physicians and hospitals shoulder the burden of liability. Through CLASSICA, a Horizon Europe project, Gerke’s team conducted the first empirical legal study of liability using focus groups with 18 U.S. and EU surgeons. The findings were published in Annals of Surgery Open.
Surgeons generally accept that ultimate responsibility remains theirs, even when using AI. They don’t see AI as part of the current standard of care, but many expect it will be in the future. Most were skeptical that manufacturers would bear significant liability unless there was a clear defect. Some called for shared accountability if surgeons followed AI instructions properly.
Patient consent emerged as another theme. The surgeons felt that patients should be informed when AI is used, particularly if following or rejecting its advice could alter outcomes.
For now, physicians and health systems are left carrying the bulk of liability, while manufacturers operate in a comparatively sheltered legal space.
How much do patients need to know about your use of AI? Legally speaking, the answer is still in flux.
“The current informed consent doctrine does not necessarily impose a duty to disclose the use of AI in most cases,” Gerke said. There are exceptions — if a patient asks directly, or if an AI tool will play a material role in a procedure.
Ethically, she argues for proportionate transparency: the deeper a system’s role in diagnosis or treatment, the more disclosure and consent should be secured.
Srivastava recommends treating informed consent as a process, not paperwork. “Physicians should approach AI with the same diligence as any clinical tool — making documentation and patient communication a priority,” she said.
Regarding ambient listening tools, for example, she acknowledged that although they save time, “their use should be disclosed and consented to.” She recommends that practices set clear protocols for when the tools are used and how patient consent is documented, and that they have alternatives in place if a patient opts out.
Coverage is beginning to shift, though cautiously. “Some carriers are introducing policy riders for practices or systems that are relying very heavily on AI tools,” Srivastava said. These often limit coverage to FDA-approved uses and exclude experimental features.
For its part, The Doctors Company, a leading provider of malpractice insurance and risk management services, currently has “no exclusion for AI. We would still defend and potentially indemnify a physician if AI played a role in a claim,” Srivastava said.
In Europe, insurers have begun tightening underwriting and governance requirements, particularly around validation, training and human oversight. AI-specific exclusions are still comparatively uncommon, but underwriters have begun to use sublimits, higher deductibles or conditions precedent when tools are unvalidated or used outside of their cleared indications. Some reserve the right to revisit terms after adverse incidents.
Although not a direct one-to-one comparison, these overseas practices may well foreshadow future adjustments to coverage in the U.S.
In court, could a plaintiff hold up an AI output as proof of what a physician “should have known”?
Simon is skeptical in the near term: “If AI is going to be admitted [as evidence] as to what the standard of care is, it would have to be the case that the AI is the standard of care.”
He also cautioned against assuming most people will automatically trust machine advice. “People are very skeptical of robot-generated diagnosis and treatment plans,” he said. That skepticism may wane, but for now, juries may still want a human at the center of the story.
Gerke, who has a law degree from the University of Augsburg in Germany, argues for clearer labeling of AI devices, modeled on food labels, to spell out training data, limitations and update processes. The idea is to get physicians the information they need without leaving patients in the dark. That said, under the learned intermediary doctrine, much of the risk would still rest with physicians.
California may also serve as an indicator of where U.S. policy is headed. Assembly Bill 2013 requires disclosures about training data and use cases, aiming to break open the AI black box. Legal analysts suggest it could become a template for other states when it goes into effect on Jan. 1, 2026.
Ultimately, courts will develop doctrine case by case. For physicians, that means watching their specialty societies, state legislatures, and the FDA’s evolving approach to software and postmarket surveillance.
Among experts, a consensus is emerging: AI will become more common, more capable and more embedded in daily care. The malpractice exposure that comes with it won’t vanish, but it can be managed.
“Transparency is generally productive,” Srivastava said. “It builds trust, and it reduces risk when handled thoughtfully.”
Simon is pragmatic: physicians should use AI where it helps, but they should know what they stand to gain, and what they might give up, before switching it on.
Gerke hopes for a future in which AI is integrated into the standard of care in ways that are fair to patients and to those who care for them. Getting there, she said, will require better labeling, stronger guidance from medical societies and smarter regulatory frameworks that don’t leave physicians carrying the load alone.