
When AI causes a crisis, you need humanity to restore trust

Commentary

Communication and empathy are keys to overcoming technology-driven disasters

Failed AI graphic ©kaptn – stock.adobe.com

Doctors are dedicated to the Hippocratic Oath, often summarized as “first, do no harm.” But technology has taken no such oath.

Major brands are now using and investing in generative and predictive artificial intelligence (AI), publicly backing its potential to reimagine many aspects of the health care experience—from patient charting to diagnoses and imaging analysis. And while the technology has great value and potential, it’s not hard to imagine how an error in its data set, design or use could lead to patient injury and a reputational crisis.

In fact, we’ve already seen wide-scale failures. One AI-powered algorithm, used by hospitals and insurance companies across the country to predict which patients needed “high-risk management programs,” frequently overlooked Black patients, according to a research study published in Science. The model conflated an individual’s health care needs with their health care spending.

Two large research studies – published in the British Medical Journal and Nature Machine Intelligence – similarly reviewed hundreds of AI tools developed to diagnose COVID-19 and triage patients. Their conclusion: Out of more than 600 models, none were found to be accurate enough for clinical use. (Many had already been used in hospitals and health systems throughout the pandemic.)

Even the World Health Organization warned this year about the lack of proper caution accompanying the “precipitous adoption of untested [AI] systems” in health care — and the errors and patient harm that could result.

While AI offers powerful possibilities for health care organizations, it’s only a matter of time before the technology causes a crisis — whether a privacy breach, patient injury or system-wide error.

And when it does, leaders must be prepared with a communications plan that can bring humanity into a technological crisis and adapt to the situation’s unique challenges. “AI made a mistake” is not an explanation that will foster trust or uphold a reputation.

Leaders of health care organizations will need to consider:

  • The speed and scale with which an AI-powered crisis can unfold. Organizations will probably have little warning of a potential crisis, limiting the ability to draft statements and contingency plans. And unlike a human medical mistake, which would likely affect a small number of patients, an error in an AI tool could immediately affect health systems all over the country. As with cybersecurity protocols, leaders should consult with technology vendors in advance and have a strategy in place to immediately address and contain technological damage.
  • You may never know what happened—and you need a communications plan that can handle that uncertainty. Generative AI is currently a “black box” that can analyze data and make predictions but can’t provide its reasoning. So if a radiology tool misread an image and made the wrong diagnosis, you probably won’t know how it arrived at that conclusion.

It’s also hard to promise that you have fixed something when you don’t know what went wrong.

While it’s critical to communicate with transparency and urgency in the face of a crisis, the opaque nature of AI will make it difficult to quickly share initial facts or provide a follow-up report. You may also be limited in what you can share by non-disclosure agreements, which some health care organizations are beginning to sign with technology vendors.

Without complete information and the ability to analyze what went wrong, the public and media attention will focus even more heavily on the actions you take to help those affected. An executive who can quickly and genuinely communicate a plan to address the wrongs will demonstrate humility, empathy and decisive leadership—and begin to earn back trust and brand reputation.

  • There’s no tolerance for technological error. People have a certain amount of understanding for human error, but a patient injury or privacy breach caused by AI is likely to elicit a much different response from patients, staff and the public.

Any crisis response must consider the fact that in today’s environment, patients already feel vulnerable to technology. According to a Pew Research Center study, 75% say their top concern about AI in health care is that providers “will move too fast” implementing new solutions “before fully understanding the risks for patients.”

In other words: No one wants the computer to be in charge. If a crisis occurs, you need to show that a human is still minding the store.

In the face of impersonal technology, it’s humanity that builds trust. Leaders must address the situation personally, with responsibility and compassion, to counterbalance the role of AI. Technology can’t apologize, provide restitution or be sued, so you must be ready to step forward with care, concern and a plan to make it right. 

Remember, while legal liability may not be clear, your organization is already on trial in the court of public opinion. Identifying the players at fault may be nuanced, as it likely depends on the information that can be gleaned from the AI tool and whether errors can be traced to a user or developer.

Still, you should be prepared for the public to hold your organization responsible for choosing to use the technology and for being the site of the injury. While you don’t want to own something that may not be your fault, it is critical to acknowledge the impact of the harm caused to your patients and immediately take action on their behalf.

AI is already profoundly changing health care. And the organizations that will lead the way are those ready both to harness the innovations AI enables and to protect their reputations and people if something goes wrong.

Brian Tierney is the CEO of Brian Communications and former publisher of The Philadelphia Inquirer, Daily News and Philly.com.
