
Defining AI liability: Could AI serve as evidence in court?
David Simon, J.D., LL.M., Ph.D., explains why AI outputs are unlikely to define medical negligence — unless courts first determine that the AI itself represents the standard of care.
David Simon, J.D., LL.M., Ph.D., associate professor of law at Northeastern University, weighs in on whether AI outputs could serve as evidence of medical negligence in court.
For now, Simon says, that scenario remains unlikely. AI-generated insights or recommendations wouldn’t hold legal weight unless the technology itself had already been recognized as the standard of care — a high bar that the field hasn’t yet reached.
“I don’t think there’s going to be a case where a plaintiff says whatever the AI says is the standard of care,” Simon explains. “The question is what a reasonable physician would have done — and that still goes to an actual person, unless that person is using AI.”