Commentary | Videos | October 23, 2025

Defining AI liability: Who should set the rules for AI in medicine?

Fact checked by: Keith A. Reynolds

David Simon, J.D., LL.M., Ph.D., contrasts the European Union’s proactive approach to AI oversight with the U.S. system’s reliance on courts and the FDA — and argues for a middle path balancing innovation and accountability.

David Simon, J.D., LL.M., Ph.D., associate professor of law at Northeastern University, examines whether artificial intelligence (AI) regulation in health care should be shaped by policymakers, the U.S. Food and Drug Administration (FDA) or the courts.

Simon contrasts the European Union’s proactive approach — regulating AI heavily before it reaches the market — with the United States’ more reactive stance, which relies on post-market FDA oversight and litigation. He argues for a middle path: stronger premarket scrutiny for medical AI systems, coupled with legal accountability once they’re in widespread use.

“We just don’t know the safety risks of these products until they’ve been used by thousands of people,” Simon says. “So it’s critical to have both front-end regulation and back-end protection through liability.”
