What’s stopping ChatGPT from transforming health care?

Physicians will only be able to trust artificial intelligence when it's transparent.

Michal Tzuchman-Katz, MD
Kahun Medical

As ChatGPT dominates discussions about the potential of artificial intelligence (AI) to disrupt entire industries, the Daily Mail reported on an Australian doctor who nervously recounted how the model supposedly “diagnosed his patient in seconds.” Of course, this scenario is the exception rather than the rule: ChatGPT, created by developer OpenAI, isn’t designed for industrial uses that require that level of precision – let alone for medically diagnosing patients.

The chatbot’s impressive success does, nevertheless, raise questions about AI’s involvement in health care.

Can a ChatGPT-esque model be used to the benefit of physicians and patients?

In a sense, yes. ChatGPT’s two most pronounced breakthroughs in relation to health care will be in:

  1. Disrupting the way we access knowledge. ChatGPT will become the one-stop-shop physicians will use to efficiently seek answers to questions for which they would otherwise need to search Google or other curated knowledge websites.
  2. Streamlining communication. Its phenomenal fluency and competent prose can help physicians efficiently communicate any thought to any audience – whether patients, insurance companies, or colleagues.

Social media is exploding with tips from physicians on how to use ChatGPT. Examples include sending prescription instructions to patients, writing instructions for tapering down a medication, constructing a letter to an insurance company requesting approval for a medication or procedure, and drafting the initial outline and abstract of a scientific paper. The list goes on, and we are only at the beginning of this historic pivot.
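
To make one of these tasks concrete, here is a minimal sketch of how a practice might script a prior-authorization letter draft against the OpenAI Python SDK. The model name, prompt wording, and clinical scenario are illustrative assumptions, and any generated letter would still need physician review before use:

```python
# Minimal sketch: drafting a prior-authorization letter with the OpenAI
# Python SDK (v1+). Requires the `openai` package and an OPENAI_API_KEY
# environment variable. The model name and prompt are illustrative
# assumptions; the draft must be reviewed by the treating physician.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft a letter to an insurance company requesting prior authorization "
    "for continuous glucose monitoring for an adult patient with type 1 "
    "diabetes. Use plain, professional language and leave placeholders "
    "for all patient identifiers."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute any available model
    messages=[
        {
            "role": "system",
            "content": "You draft administrative correspondence for a medical practice.",
        },
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)  # a draft, not a final letter
```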

Where does it still fall short?

The real question is whether ChatGPT can think clinically about a patient. Can OpenAI’s model perform clinical reasoning in an evidence-based manner that can assist physicians in decision making?

That’s where it gets tricky.

One of the biggest hurdles ChatGPT faces – in health care and other sectors – is that it is built with the “black-box” machine learning approach, meaning it offers no transparency into how the model produces its output. This significantly limits the potential, for example, for writers and researchers to leverage ChatGPT beyond ideation, outlines, and short paragraphs, because the model doesn’t trace its output back to originating sources.

It’s not that ChatGPT, or black-box AI more broadly, was designed to deliberately obscure its decisions. Rather, the opacity is a consequence of how the software is developed. Many black-box methods for building the health care-geared AI models that power chatbots and clinical-intake tools produce output by comparing each specific case with the countless patient records in their databases. In doing so, they in effect base their algorithmic decisions on big data, making it impossible to reason through those decisions or reference them to a specific medical source.

We have all become accustomed to a hit-or-miss AI that produces output almost magically. It gets some things right and others wrong, but never explains its reasoning or references back to its sources.
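
To illustrate the distinction in code, here is a toy sketch – not Kahun’s method or any production system – of a source-grounded lookup that returns its answer alongside the citation it came from, which is exactly what a black-box reply lacks. The corpus entries and citations are invented placeholders:

```python
# Toy contrast: a source-grounded lookup returns its citation, while a
# black-box model returns bare text. The corpus below stands in for
# peer-reviewed literature; both entries are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    ("Example citation A (placeholder)",
     "Beta-blockers reduce mortality after myocardial infarction."),
    ("Example citation B (placeholder)",
     "Metformin is first-line pharmacotherapy for type 2 diabetes."),
]
sources = [src for src, _ in corpus]
passages = [text for _, text in corpus]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(passages)

def answer_with_reference(question: str) -> str:
    """Return the best-matching passage together with its citation."""
    scores = cosine_similarity(vectorizer.transform([question]), matrix)
    best = int(scores.argmax())
    return f"{passages[best]} [{sources[best]}]"

# The output is traceable to a named source, unlike a black-box reply.
print(answer_with_reference("What is the first-line drug for type 2 diabetes?"))
```

Even this crude lookup does something a black-box model cannot: every answer it returns can be traced to a named source.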

Building effective explainable AI

What will it take for physicians to trust and adopt AI-based technology in their practices? Building explainable AI (XAI) starts with the data on which we train our models. Companies need to have transparency and explainability in mind early on so they start with the appropriate data – data that the intended software’s users understand and already rely on.

In the case of health care software that is meant to work side by side with providers, that means data from peer-reviewed, high-quality medical literature. Standardized care based on reliable evidence is the key to high-quality care. AI systems built on that principle rely not only on the quantity of data they use, but also on the models’ ability to understand the content of these sources and apply it intelligently, where needed, in real time.
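
As a sketch of what that principle could look like at the software level, the structure below attaches evidence to every conclusion so a physician can audit it. The field names and example content are illustrative assumptions, not a description of any existing product:

```python
# Sketch of an "explainable" output contract: every conclusion carries the
# evidence it rests on, so a physician can audit it. Field names and the
# example content are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str   # e.g., a peer-reviewed article or clinical guideline
    excerpt: str  # the passage the conclusion is grounded in

@dataclass
class ExplainableFinding:
    conclusion: str
    evidence: list[Evidence] = field(default_factory=list)

    def render(self) -> str:
        refs = "; ".join(e.source for e in self.evidence)
        return f"{self.conclusion} [supported by: {refs or 'NONE - flag for review'}]"

finding = ExplainableFinding(
    conclusion="Consider pulmonary embolism in acute pleuritic chest pain.",
    evidence=[
        Evidence(
            source="Example guideline (placeholder)",
            excerpt="Pulmonary embolism should be considered when...",
        )
    ],
)
print(finding.render())
```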

There are several ways in which XAI could benefit physicians when it comes to clinical reasoning tools:

  • Improved trust and confidence: By providing physicians with insights that are fully explainable and referenced to the same trusted sources they use, XAI can help to build trust and confidence among physicians. This can make physicians more likely to use these tools and can help to ensure that they are used effectively.
  • Reduced bias and standardized care: Every physician is limited by their own bias and blind spots. By providing physicians with a trusted tool that consults with all the relevant medical literature, XAI covers physicians’ blind spots and ensures a basic standardization of care for all patients.
  • Improved efficiency: By providing a clearer understanding of how it arrives at its output and earning physicians’ trust, XAI expedites patient visits, leaving more time for building treatment plans and relieving bottlenecks.

When built in a transparent and explainable way, AI offers tremendous potential for improving the clinical-reasoning process, ensuring high-quality care while also making physicians’ lives easier. The black-box approach hinders the ability to develop models that win physicians’ trust, and for good reason. It’s time to steer the AI ship in the explainable direction. Until then, ChatGPT-like tools will be used mostly for the administrative side of health care.

Michal Tzuchman-Katz, MD, is a cofounder and chief medical officer at Kahun Medical, a company that built an evidence-based clinical reasoning tool for physicians. Before cofounding Kahun in 2018, she worked as a pediatrician at Clalit Health Services, Israel’s largest HMO. She practices emergency medicine in the pediatric emergency department at Ichilov Sourasky Medical Center, where she also completed her residency. Additionally, she has a background in software engineering and led a tech development team at LivePerson.
