Commentary|Videos|October 22, 2025

Defining AI liability: Why accountability in the AI era is still a gray zone

Fact checked by: Keith A. Reynolds

David Simon, J.D., LL.M., Ph.D., examines the unresolved gray areas between physicians, hospitals and AI manufacturers when errors occur.

David Simon, J.D., LL.M., Ph.D., associate professor of law at Northeastern University, unpacks the complex web of responsibility that emerges when artificial intelligence (AI) tools are integrated into patient care. From device makers and hospital systems to frontline physicians, Simon says every link in the chain carries potential exposure, yet the boundaries of that exposure remain murky.

“The gray areas are everywhere,” Simon explains. “Manufacturers, hospitals, and physicians all have roles to play in validating, adopting, and safely using AI — but right now, it’s unclear how liability will be shared when things go wrong.”

He urges clinicians and health systems to ask critical questions about validation data, marketing authorization and contractual risk-sharing before AI enters the clinical workflow.
