Commentary | Videos | October 24, 2025

Defining AI liability: A conversation with David A. Simon, J.D., LL.M., Ph.D.

Fact checked by: Keith A. Reynolds

How will artificial intelligence reshape the rules of medical malpractice? Northeastern University’s David Simon unpacks the legal, ethical and practical dilemmas now confronting physicians, hospitals and AI developers.

As artificial intelligence (AI) moves deeper into clinical care, the question of liability is no longer theoretical — it’s urgent.

Medical Economics sat down with David A. Simon, J.D., LL.M., Ph.D., associate professor of law at Northeastern University School of Law, whose research spans health law, torts and technology regulation.

Simon explores how AI could redefine the standard of care, shift responsibility between physicians and manufacturers, and test the boundaries of malpractice and product liability law. He also discusses the risks of physician “de-skilling,” the credibility of AI in courtrooms, and the role regulators should play in balancing innovation with accountability.

From early analogies drawn from self-driving car cases to the first generation of AI-assisted diagnostic tools, Simon offers a candid look at the legal gray zones now forming around clinical technology — and what physicians can do today to protect themselves.

This conversation accompanies Medical Economics' in-depth story, "The new malpractice frontier: Who's liable when AI gets it wrong?"
