AMA issues principles for guiding AI development and use


Statement calls for government policies to mitigate technology’s risks in health care

Amid the rapid growth of what it calls “augmented intelligence” (AI) in health care, the American Medical Association (AMA) has issued a set of principles to channel the technology’s development and use in ways that maximize its benefits and minimize potential harm to patients and clinicians.

“The AMA recognizes the immense potential of health care AI in enhancing diagnostic accuracy, treatment outcomes, and patient care,” AMA President Jesse M. Ehrenfeld, M.D., M.P.H., said in a statement. “However, this transformative power comes with ethical considerations and potential risks that demand a proactive and principled approach to the oversight and governance of health care AI.”

Ehrenfeld said the principles will guide the AMA’s discussions with lawmakers and industry stakeholders on policies to regulate the development and use of AI in health care. The areas the principles address are:

  • Oversight: The association encourages a government-wide approach to implement policies to mitigate risks associated with health care AI, while acknowledging that non-government entities have a role in appropriate oversight and governance of health care AI.
  • Transparency: Key characteristics and information regarding the design, development, and deployment of AI processes should be mandated by law where possible, including potential sources of inequity in problem formulation, inputs, and implementation. The statement calls transparency “essential for the use of AI in health care to establish trust among patients and physicians.”
  • Disclosure and documentation: The statement calls for “appropriate disclosure and documentation when AI directly impacts patient care, access to care, medical decision making, communications, or the medical record.”
  • Generative AI: The statement encourages health care organizations to develop and adopt policies that anticipate and minimize the potential negative effects of generative AI, and to have these policies in place prior to its adoption.
  • Privacy and security: The statement says AI developers have a responsibility to design their systems with privacy in mind, and that developers and health care organizations must implement safeguards to assure patients that their personal information is handled responsibly. Strengthening AI systems against cybersecurity threats is crucial to their reliability, resiliency, and safety.
  • Bias mitigation: To promote equitable health care outcomes and a fair and inclusive health care system, the AMA calls for proactively identifying and mitigating bias in AI algorithms.
  • Liability: The association says it will “continue to advocate to ensure that physician liability for the use of AI-enabled technologies is limited and adheres to current legal approaches to medical liability.”

The statement also urges payors not to use automated decision-making systems in a way that reduces access to needed care or withholds care from specific groups. “Steps should be taken to ensure that these systems are not overriding clinical judgement and do not eliminate human review of individual circumstances,” it says.
