ACP: Physicians, not AI, must guide patient care


American College of Physicians publishes position paper on best practices for integrating artificial intelligence into health care.

AI illustration: © The other house - stock.adobe.com

While artificial intelligence (AI) may be a useful tool in medicine, doctors must remain the brains behind patient care now and in the future, according to the American College of Physicians (ACP).

The new position paper outlines how physicians and other clinicians should integrate the technology into health care. ACP called for transparency in the development, testing and use of AI; awareness of equity and bias among physicians; and regulatory rules governing the technology.

ACP suggested a dose of skepticism because flawed inputs lead to flawed outputs that can hurt physician decision-making – and ultimately patients. Yet too much suspicion might mean patients and physicians don’t reap the benefits of the new technology.

“A balanced approach to AI technologies is in order,” said “Artificial Intelligence in the Provision of Health Care: An American College of Physicians Policy Position Paper,” published June 4 in Annals of Internal Medicine.

"AI has the potential to aid in solving some of the issues currently plaguing the health care industry, such as clinician shortages, burnout, and administrative burdens," ACP President Isaac O. Opole, MBChB, PhD, MACP, said in an accompanying news release. "However, to ensure that we are able to realize the most benefit, with the fewest harms to patients, we need to fully understand the implications of the technology that we are implementing."

ACP outlined 10 recommendations and position statements for physicians, other clinicians, lawmakers and patients to consider.

First: “ACP firmly believes that AI-enabled technologies should complement and not supplant the logic and decision making of physicians and other clinicians.”

Going forward

Among the ACP’s suggestions:

• AI developers and researchers should prioritize privacy and confidentiality of patient and physician data used for AI development.

• AI and other new technologies must reduce, not worsen, disparities in health care.

• AI is far enough along that the nation needs “a coordinated federal AI strategy, built upon a unified governance framework,” with oversight, enforcement and appropriate reporting of adverse events involving the technology.

• AI should be designed to reduce burdens on physicians and other clinicians in support of patient care.

• Training on AI for physicians must be incorporated in all levels of medical education.

State of the science

AI boomed starting in November 2022, when OpenAI publicly released its ChatGPT program.

“These technologies have various applications throughout the provision of health care, such as clinical documentation, diagnostic image processing, and clinical decision support,” the ACP paper said. “With the growing availability of vast amounts of patient data and unprecedented levels of clinician burnout, the proliferation of these technologies is cautiously welcomed by some physicians.

“Others think it presents challenges to the patient–physician relationship and the professional integrity of physicians,” the ACP paper said. “These dispositions are understandable, given the ‘black box’ nature of some AI models, for which specifications and development methods can be closely guarded or proprietary, along with the relative lagging or absence of appropriate regulatory scrutiny and validation.”

But the concepts and technology were not necessarily new. Applications of AI and machine learning (ML) began in the 1970s. More recently, from January 2020 to October 2023, the U.S. Food and Drug Administration gave its nod to more AI- and ML-enabled tools than in the previous 25 years, the ACP paper said.

Who’s involved

The paper includes a glossary, discussions of terminology and regulatory agencies, and extended rationales for each stance.

The authors are Nadia Daneshvar, JD, MPH; Deepti Pandita, MD; Shari Erickson, MPH; Lois Snyder Sulmasy, JD; and Matthew DeCamp, MD, PhD. The paper was developed with ACP’s Committees on Medical Informatics and on Ethics, Professionalism and Human Rights, with review and approval by the College’s Board of Regents in February 2024.
