
Mindfulness in medicine: On the use of artificial intelligence in physician training
A conversation with a physician-author and nationally known expert on how doctors become master clinicians.
Ronald Epstein, MD, discusses how insights about presence and human connection can inform the use of AI, describing an avatar-based training tool that helps clinicians improve communication around serious illness by providing detailed, objective feedback on language, focus, and responsiveness. He cautions that while AI can be powerful in education and structured tasks, it is poorly suited for clinical gray zones and value-laden decisions, where shared understanding and human judgment remain essential.
Melissa Lucarelli, M.D., FAAFP: Artificial intelligence seems to be on everyone’s mind these days. In the book chapter “Being Present,” you wrote back in 2017 about people’s sense of shared presence and bonding with video game avatars, and you suggested they might connect with a psychotherapist in the same way. Well, now that we actually have AI chatbots acting as psychotherapists, how can that past research about presence inform the safer use of AI in mental health care?
Ronald M. Epstein, M.D., FAAHPM: I would extend it beyond mental health care, but first I have to say, parenthetically, that these days I’m much more concerned about human intelligence than artificial intelligence. I’m involved in a project to train clinicians who take care of seriously ill patients to communicate more effectively about things like prognosis, treatment choices, hospice, palliative care, uncertainty. I mean, these are the big things: death and dying. It’s a training program to help clinicians help their patients navigate those difficult times more effectively. To do that, we created an on-screen avatar. The prototype was named SOPHIE, someone you could have a conversation with on-screen. She looked kind of humanoid, acted a little bit robotic, but was able to listen, if you will, to what you said and respond more or less appropriately. And while you were doing this, [it] was monitoring your own responses: how fast the physician speaks, whether they use jargon, whether they forget to address a concern the patient has brought up, whether they use what we call hedge words, words that keep patients from understanding quite what the doctor is saying: you know, maybe, possibly, this might be associated with such and such. The patient walks out of the office saying, what did the doctor say? And we also know that patients remember, on a good day, about half of what we say to them, and the rest goes elsewhere.
So the avatar is imperfect, but the feedback the avatar is able to give is stunning. It will say, the patient said they’re worried about dying, and you changed the conversation here to talk about their cholesterol level. And I’ve heard this in real life, because of my research. It’ll say, here are five ways you could have addressed their concern. Or it will tell you that your speech is pitched at the reading level of a graduate student in one of the biomedical sciences. For most patients, you have to tailor your language to maybe an eighth-grade reading level, and it will show you ways to do that: use this word, and here are other words that might have been more understandable.
After just a half-hour interaction with SOPHIE and getting feedback, physicians’ performance with a human actor whom we trained to portray a somewhat similar situation improved substantially [versus] those who didn’t have that online training. The area of AI that I’m involved in is really education, because standardized patients are expensive and hard to get, and once you finish medical school, they’re not usually available. Here’s a way you can improve your communication skills. It can also track your eye movements, whether and where you’re looking. I mean, there are all sorts of sophisticated things it can do. And I think there’s mindfulness, in a peculiar way, in this tool, because it makes you aware of stuff that you would otherwise be completely unaware of. We all think we’re clear; we all think we’re explaining things clearly. But this little bot can tell you, wait a second, no, you’re not. The key word, the most important word in your discussion, was a word that most people don’t understand.
I’ve seen studies where, for mechanical aspects of health care, like where is the dialysis suite in the hospital, it will give you really clear directions, which elevator to take and how many yards to walk, and those directions will be better than a human’s. But if you type into AI, you know, gee, I’m a 71-year-old male with a blood pressure of 120 over 70, an HDL of 95 and an LDL of 66, should I be taking statins? There you get into trouble, because that’s a total gray zone. I picked that example because it’s me, and it’s right on the borderline: I would be recommended for a lipid-lowering medication in the United States, but it wouldn’t be recommended if I were in England or Canada or Switzerland or Norway. It’s this huge gray zone, and it’s a place where I don’t think AI could really help you very much. I think you need to have a shared conversation with a human who is willing to listen.