Experts urge safe implementation of AI in health care through new guidance

Key Takeaways

  • AI in health care requires robust governance, rigorous testing, and clinician training to ensure safety and effectiveness without introducing new risks.
  • Multidisciplinary governance committees should oversee AI implementation, review applications, and ensure adherence to safety protocols.

Guidance takes a pragmatic approach to managing AI systems in clinical practice

Clinical AI needs guidelines: ©Phonlamaiphoto - stock.adobe.com

As artificial intelligence becomes increasingly prevalent in health care, organizations and clinicians must adopt measures to ensure its safe implementation in real-world settings, according to Dean Sittig, PhD, of UTHealth Houston, and Hardeep Singh, MD, MPH, of Baylor College of Medicine. Their guidelines, published in the Journal of the American Medical Association, offer a pragmatic approach to managing AI systems in clinical practice.

“We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings,” Sittig said in a statement. “It is a tool that has the potential to revolutionize medical care, but without safeguards, AI could generate false or misleading outputs that could harm patients.”

The recommendations are based on expert opinions, literature reviews, and lessons from the safe use of health IT. Sittig and Singh emphasize the importance of robust governance, rigorous testing, and clinician training to ensure AI systems enhance safety and outcomes without introducing new risks.

“Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI,” Singh said in a statement. “All health care delivery organizations should check out these recommendations and start proactively preparing for AI now.”

Key recommendations

Sittig and Singh’s framework includes several critical steps:

  • Rigorous Testing: Conduct real-world testing of AI tools to confirm their safety and effectiveness before deployment.
  • Governance Committees: Establish multidisciplinary committees to oversee AI implementation, review new applications, and ensure adherence to safety protocols.
  • Clinician Training: Provide formal training on the risks and safe use of AI.
  • Patient Transparency: Be transparent with patients about AI’s role in their care, which is crucial for building trust.
  • System Inventories and Monitoring: Maintain detailed records of AI systems and perform regular evaluations to identify and mitigate risks.
  • Fail-Safe Protocols: Develop procedures to deactivate malfunctioning AI systems and transition seamlessly to manual processes when needed.

The authors stressed the importance of collaboration among health care providers, AI developers, and electronic health record vendors to protect patients and ensure AI’s safe integration into clinical care.

“By working together, we can build trust and promote the safe adoption of AI in health care,” Sittig said.
