April 23, 2026

AMA presses Congress for guardrails on AI chatbots

Fact checked by: Keith A. Reynolds

Key Takeaways

  • Federal policy priorities include mandatory AI identity disclosures, disclosure of clinician oversight, and prohibitions against portraying chatbots as licensed clinicians, with FTC enforcement for deceptive practices.
  • Statutory limits would restrict mental health diagnosis/treatment by chatbots, triggering FDA review when outputs function as clinical care rather than general information.

The physician group is pushing for disclosure rules, FDA review, crisis-detection mandates and advertising limits on tools patients increasingly use for mental health support.

The American Medical Association (AMA) is asking Congress to put firmer rules around artificial intelligence (AI) chatbots used in mental health care, warning that patients are leaning on these tools without the safeguards that typically govern clinical interactions.

In letters sent April 22 to the co-chairs of the House and Senate AI caucuses and the Congressional Digital Health Caucus, AMA CEO John Whyte, M.D., M.P.H., laid out a set of safeguards the group wants written into federal policy — ranging from mandatory disclosures that users are talking to a machine to Food and Drug Administration (FDA) review for any chatbot that crosses into diagnosis or treatment of mental health conditions.

"AI-enabled tools may help expand access to mental health resources and support innovation in health care delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation and inadequate crisis response," Whyte wrote. "With thoughtful oversight and accountability, policymakers can support innovation and ensure technologies prioritize patient safety, strengthen public trust and responsibly complement — not replace — clinical care."

The letters arrived the same day OpenAI announced a free version of ChatGPT aimed at verified U.S. clinicians, a reminder of how quickly these tools are moving from consumer apps into clinical workflows. A 2026 AMA survey cited in that announcement found 72% of physicians now use AI in clinical practice, up from 48% a year earlier.

What the AMA is asking for

The letters cluster the group's recommendations into five areas.

On transparency, the AMA wants chatbots to clearly disclose that they are machines and whether any clinician is overseeing the interaction. It would bar any AI tool from holding itself out as a licensed clinician and give the Federal Trade Commission (FTC) authority to penalize deceptive practices under its existing unfair-or-deceptive-acts power.

On regulation, the AMA wants Congress to set statutory limits that prohibit chatbots from diagnosing or treating mental health conditions, with FDA review triggered when a tool crosses that line. The group argued that existing oversight frameworks, written for static products and traditional medical devices, cannot account for generative AI that can shift from casual conversation to quasi-therapeutic advice inside a single session. A simple disclaimer, the AMA said, should not be enough to escape regulatory review.

The letters also call for mandatory crisis-detection systems that identify suicidal ideation and self-harm risk, hand users off to appropriate resources and use de-escalation language rather than pulling vulnerable users deeper into conversation. Developers should face post-deployment safety monitoring and mandated reporting of serious incidents, the AMA said, with stronger scrutiny for tools used by children and adolescents.

On advertising, the AMA wants commercial messaging discouraged inside mental health chatbots and banned outright when the user is a minor. Outputs, it said, should be free of sponsorship bias, and operators should not share data with third-party trackers.

And on privacy, the group is asking for meaningful limits on retention of sensitive conversations, clear consent for how data is used, and safeguards against agentic AI systems that can act across a user's connected accounts, such as calendars or email.

A widening evidence gap

The AMA's concerns about reliability are showing up in the peer-reviewed literature, too.

A study published April 13 in JAMA Network Open tested 21 frontier large language models on 29 standardized patient vignettes. The models arrived at a correct final diagnosis more than 90% of the time but failed to generate an appropriate differential diagnosis more than 80% of the time. Corresponding author Marc Succi, M.D., executive director of the MESH Incubator at Mass General Brigham, said that gap is where the real safety risk sits.

"I would be comfortable using artificial intelligence for low-risk, high-feasibility tasks. That includes things like ambient documentation, visit notes, summarization, patient-friendly explanations and billing," Succi told Medical Economics. "Once you start moving into higher-risk uses — clinical decision support, responding to patient messages, ordering lab tests, renewing medications, psychiatric medications — that is where you need to stop and look at the level of performance very critically and in multiple ways, not just trust what the vendors say."

What clinicians are already seeing

For primary care, the AMA's concerns about emotional dependency often surface in small ways at the exam room door. Patients are arriving with chatbot-generated interpretations of lab results, symptom explanations and care plans, and the productive move, some experts say, is usually to engage rather than dismiss.

"The most productive path would not be to dismiss the effort," said Amber Maraccini, Ph.D., M.A., who leads the health care and life sciences practice at experience software firm Medallia. "Instead, it's looking at the information and saying, let's look at this together."

Maraccini said clinicians should also watch for comments that suggest a patient is substituting AI for the relationship, such as saying it feels easier or more empathetic to talk to a chatbot than to their physician. Those, she said, are usually signals about the clinician-patient relationship rather than about the technology.

The AMA framed its recommendations as a starting point rather than a ceiling, noting that updated protections will likely be needed as the underlying technology evolves. The group said it remains committed to working with Congress on the issue.
