
Defining AI liability: Will juries trust AI over physicians?
David Simon, J.D., LL.M., Ph.D., explores how public skepticism toward “robot diagnosis” could shape future court decisions, even as confidence in AI’s accuracy grows over time.
David Simon, J.D., LL.M., Ph.D., associate professor of law at Northeastern University, explores how public perception, and courtroom skepticism in particular, could shape the future of AI liability in medicine.
Even if AI systems outperform physicians on diagnostic accuracy, Simon notes that juries and patients alike remain wary of “robot-generated” care.
“People are very skeptical,” he says. “If you give them the choice between a robot that’s 95% accurate and a human that’s 70% accurate, most will still pick the human.”
That instinct for human judgment, he explains, could influence how courts assess the credibility of AI recommendations and who bears responsibility when machine-made decisions go wrong.