November 7, 2025

The 5-pillar framework for AI compliance in your practice

Fact checked by: Todd Shryock

Key Takeaways

  • AI adoption in healthcare introduces privacy, security, and trust risks, necessitating regulatory measures like the ONC's HTI-1 Final Rule and HHS's 2025 AI Strategic Plan.
  • Providers must ensure AI tools are HIPAA-compliant, backed by Business Associate Agreements and algorithmic transparency to guard against bias.

Explore essential strategies for AI compliance in medical practices, ensuring patient safety, privacy, and trust while navigating new regulations and technologies.

Health care has never been more connected. Or more vulnerable. While artificial intelligence is rapidly reshaping health care from medical documentation to patient communication, its adoption carries significant, often overlooked, risks to privacy, security, and trust. This is the exact challenge the ONC's HTI-1 Final Rule is designed to address.

Providers using certified health IT now face new transparency rights and added responsibilities. In parallel, the HHS released its 2025 AI Strategic Plan, calling for “Trustworthy AI” and stronger federal oversight.

Accountability isn’t just a federal concern anymore. In California, for example, AB 489 limits how AI can be presented in health care, banning systems from using titles or language that suggest they’re licensed professionals. Innovation is still welcome. But now, it has to answer to the people it serves.

We've all heard rumors of providers discreetly using consumer-oriented chatbots, unaware that, as the HIPAA Journal clarifies, tools like ChatGPT are not HIPAA-compliant. What begins as an attempt to save time can quickly become a compliance nightmare.

Technological innovation consistently outpaces regulation. In health care, that lag is a direct threat to patient safety and professional credibility. To bridge the divide, I’m sharing a five-pillar framework to guide practices in integrating these technologies safely and effectively.

1. Get the BAA and actually read it

Many AI vendors use the phrase "HIPAA-compliant" as a selling point, but marketing language is not a safeguard. You still need a Business Associate Agreement (BAA): the contract that spells out each party's obligations under HIPAA, requires the necessary security protocols, and holds the vendor accountable for appropriately handling protected health information (PHI). Without one, your practice could be fully liable for HIPAA violations and breaches.

But don't stop at getting the BAA signed. To get real protection from it, read it thoroughly.

Here are two critical, often-overlooked details:

  • Downstream Subcontractors: Does the BAA cover the vendor's vendors? If your AI vendor uses a cloud provider (like AWS or Azure) to process data, a BAA must be in place with that "downstream" subcontractor. If not, your data may be exposed.
  • Vague Breach Timelines: Your agreement might use ambiguous wording for breach notifications, such as “promptly notify.” That language is subjective and doesn't guarantee you'll be alerted within a specific window. Negotiate a concrete timeline, such as "within ____ days of discovery," so you can meet your own notification deadlines.

BAAs are not only a regulatory requirement but also instrumental in demonstrating a vendor’s commitment to safeguarding PHI. Choose your vendors with care!

2. Demand the "AI nutrition label"

Thanks to the ONC HTI-1 Final Rule, the game has changed. For AI tools integrated into certified health IT, developers must now provide "algorithm transparency" information. Think of it as an "AI nutrition label" or "model card" for your software tools.

You can ask your vendors for:

  • Source Data: What was the origin of the model's training data? Confirm that it was diverse and representative enough to avoid built-in biases.
  • Demographic Efficacy: How does the tool perform specifically for your patient populations?
  • Application Scope: What is the tool's exact, intended function, and what are its documented limitations?

You are entitled to this data, and it's essential for a new, critical step: vetting for bias.
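
To make that vetting concrete, here is a minimal sketch in Python of how a practice might record a vendor's transparency disclosures and flag the patient populations the vendor never evaluated. The structure and field names are illustrative assumptions, not taken from the HTI-1 rule or any specific vendor:

```python
from dataclasses import dataclass

# Hypothetical structure for capturing a vendor's "AI nutrition label."
# Field names are illustrative, not drawn from the HTI-1 rule itself.
@dataclass
class ModelCard:
    tool_name: str
    training_data_sources: list[str]   # origin of the training data
    demographics_evaluated: list[str]  # populations the vendor tested on
    intended_use: str                  # exact, documented function
    known_limitations: list[str]

    def gaps_for(self, my_populations: list[str]) -> list[str]:
        """Return patient populations the vendor never evaluated."""
        return [p for p in my_populations
                if p not in self.demographics_evaluated]

card = ModelCard(
    tool_name="ExampleScribe",  # hypothetical product name
    training_data_sources=["de-identified EHR notes, 2018-2023"],
    demographics_evaluated=["adults 18-65"],
    intended_use="Draft visit summaries from ambient audio",
    known_limitations=["not validated for pediatric encounters"],
)
print(card.gaps_for(["adults 18-65", "pediatric", "geriatric 65+"]))
# -> ['pediatric', 'geriatric 65+']
```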

3. Define your "red line" and vet for bias

A successful practice must know where to draw the line: Automation assists, but human judgment decides.

On one hand, AI is a game-changing tool for administrative tasks like scheduling, claims, and paperwork that often cause burnout. On the other hand, it's a serious risk in clinical decision-making. When an algorithm starts interpreting symptoms or imitating a physician’s reasoning, it ceases to be a supporting tool and crosses that line.

Risk is also emerging in administrative tools in the form of algorithmic bias. An AI model trained on historical data can absorb the unintentional biases embedded in that history. A scheduling bot could begin de-prioritizing patients with certain insurance plans, and a clinical tool trained on biased data might recommend insufficient pain medication for specific patient demographics.

Therefore, your policy must define the role of AI in your practice. For example: “AI may be used as a Clinical Decision Support (CDS) tool to suggest potential diagnoses or summarize findings based on EHR data. However, the final diagnosis and treatment plan are the sole responsibility of the licensed clinician.” This policy confirms AI is a tool, not a replacement for clinical judgment, and it reinforces the “Human-in-the-Loop” principle (see Pillar #5 below). The policy should also require a clear process to vet new tools for potential bias before deployment.
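
As an illustration of what that vetting process might look like, here is a minimal sketch, assuming you can export a pilot run's decisions as simple (group, outcome) pairs, that compares how often a hypothetical scheduling bot prioritizes patients across insurance plans:

```python
from collections import defaultdict

def priority_rate_by_group(decisions):
    """decisions: list of (group, was_prioritized) pairs from a pilot run."""
    counts = defaultdict(lambda: [0, 0])  # group -> [prioritized, total]
    for group, prioritized in decisions:
        counts[group][0] += int(prioritized)
        counts[group][1] += 1
    return {g: p / t for g, (p, t) in counts.items()}

# Hypothetical pilot data from a scheduling bot.
pilot = [("Plan A", True), ("Plan A", True), ("Plan B", False),
         ("Plan B", False), ("Plan B", True), ("Plan A", True)]

for group, rate in priority_rate_by_group(pilot).items():
    print(f"{group}: prioritized {rate:.0%} of the time")
# A large gap between groups is a signal to pause deployment and investigate.
```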

4. Move from passive disclosure to informed consent

Too often, practices treat an AI disclosure as a legal formality. Adding "AI is not a doctor" to a chatbot does little to build trust.

Transparency is your greatest asset. A simple statement like, “We use an AI assistant to accurately transcribe our discussions so I can give you my full, undivided attention,” transforms a new technology from a point of skepticism into a moment of trust.

For any tool that actively “listens” or processes clinical conversations, you must go one step further and obtain explicit, opt-in consent. This can be as simple as asking:

“To improve the accuracy of your medical record, our provider may use an AI-powered scribe to transcribe this visit. Do you consent to the use of this tool?”

Such an approach keeps patients well-informed, demonstrates your practice’s commitment to ethics, and ensures compliance with regulatory requirements.
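
Where your systems allow it, that spoken consent should also leave a record. Here is a minimal sketch, assuming your EHR or practice-management system can store a structured note per visit; the function and field names are illustrative:

```python
from datetime import datetime, timezone

# Hypothetical helper for logging a patient's opt-in to an AI tool.
def record_consent(patient_id: str, tool: str, granted: bool) -> dict:
    return {
        "patient_id": patient_id,
        "tool": tool,                    # e.g., "AI-powered scribe"
        "consent_granted": granted,      # explicit opt-in, never assumed
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "script_version": "2025-11",     # which consent wording was read aloud
    }

entry = record_consent("pt-0042", "AI-powered scribe", granted=True)
print(entry)
```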

5. Build a "human-in-the-loop" governance framework

Compliance depends on accountability, and you can't manage AI tools effectively if you don't understand how data moves through them.

Your framework must be built on two components:

  1. Immutable Audit Trails: Every AI system that touches PHI must provide tamper-evident logs of who accessed data, when, and why. These logs are your indisputable proof of diligence in an audit.
  2. Human-in-the-Loop (HITL) Oversight: This is the industry-standard term for what we used to call "human supervision."

A formal, required workflow is non-negotiable: a licensed professional must review and validate all AI-generated output that affects patient care, including visit summaries and billing codes. That review keeps technology in its proper role as a tool. The framework should also define clear roles, AI's limitations, data-handling protocols, and the precise methods for safe and effective AI integration.
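
To show how the two components fit together, here is a minimal sketch of a hash-chained, append-only audit log in which every entry carries a human-review flag. It's a simplified illustration of the idea, not a substitute for your vendor's certified logging:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log: each entry is chained to the previous one by hash,
    so any after-the-fact edit breaks the chain and becomes detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, actor: str, action: str, reviewed_by_clinician: bool):
        entry = {
            "actor": actor,                                  # who touched the data
            "action": action,                                # what they did
            "reviewed_by_clinician": reviewed_by_clinician,  # the HITL flag
            "timestamp": time.time(),                        # when
            "prev_hash": self._prev_hash,                    # link to prior entry
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

log = AuditLog()
log.record("ai-scribe", "drafted visit summary", reviewed_by_clinician=False)
log.record("dr_smith", "approved visit summary", reviewed_by_clinician=True)
```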

Your goal: A "FAVES" practice

The AI landscape is moving fast, but the path to responsible adoption is clear. The HHS is pushing for AI that is Fair, Appropriate, Valid, Effective, and Safe (FAVES). By building a governance framework on these five pillars, you put your practice on the path to compliance and make it safer, more efficient, and more worthy of the trust your patients place in you.

Mohammad Dabiri is RXNT's Director of Engineering, AI. Mohammad is a seasoned entrepreneur and intrapreneur with more than 15 years of experience driving innovation and growing companies and organizations from inception through their growth stages, including founding and scaling two deep-tech companies. He has introduced groundbreaking AI-centered products that challenge the state of the art across the finance, medical, and industrial automation industries.
