January 8, 2026

OpenAI launches ChatGPT Health, directly linking patient portals to the AI chatbot

Fact checked by: Keith A. Reynolds

Key Takeaways

  • ChatGPT Health lets patients sync medical records and wellness data to help them understand results and prepare for medical visits.
  • The platform emphasizes support, not diagnosis, and operates in a secure, separate environment to protect sensitive health information.

The new health-focused tab in ChatGPT lets users sync lab results, records and data from wellness apps for tailored answers.

OpenAI, the San Francisco-based artificial intelligence (AI) company behind ChatGPT, is moving deeper into health care, announcing a new product that invites patients to sync their medical records and wellness data directly into the ChatGPT platform.

On Jan. 7, 2026, the company announced the launch of ChatGPT Health, a dedicated space within ChatGPT where users can link patient portals, Apple Health and popular wellness apps, then ask questions grounded in their own lab results, visit summaries and insurance documents.

The rollout comes after OpenAI published new usage data showing just how often people already treat ChatGPT as a health reference. The company says that more than 230 million people around the world ask health or wellness questions each week, and that roughly 40 million users now turn to ChatGPT for health-related queries each day.

For physicians, that means more patients arriving with AI-generated explanations of their test results, draft appeal letters for coverage denials and follow-up questions shaped by a system that now has access to far more of their personal health data.

What ChatGPT Health does

Within the ChatGPT sidebar, “Health” appears as a separate tab with its own chat history and “memories.”

Once users are onboarded, they can choose to connect:

  • Electronic health records, via a partnership with b.well, which aggregates data from roughly 2.2 million U.S. health care providers.
  • Apple Health on iOS devices.
  • Wellness and lifestyle apps including MyFitnessPal, Weight Watchers, Peloton, AllTrails, Instacart and Function.

From there, a typical interaction looks more like a guided health check-in than a visit to Dr. Google. Patients can upload PDF lab reports and ask for a plain-language summary, request help preparing questions for an upcoming visit or compare the pros and cons of different insurance options based on their prior claims and utilization.

OpenAI is explicit about the boundaries. “Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time — not just moments of illness — so you can feel more informed and prepared for important medical conversations,” the company wrote in its announcement.

The product’s value proposition is comprehension and preparation, not clinical decision-making.

Early access is rolling out first to a limited group of users on ChatGPT’s Free, Go, Plus and Pro plans outside the European Economic Area, Switzerland and the United Kingdom. Medical record integrations and some app connections are available only in the United States, and connecting Apple Health requires an iOS device.

OpenAI says it plans to expand access to all users on the web and iOS “in the coming weeks.”

A separate space for sensitive data

OpenAI is pitching ChatGPT Health as a more secure compartment for highly sensitive information.

Health conversations, connected apps and uploaded files live in a sandboxed environment that is separate from the rest of ChatGPT. Health has its own memory store, meaning information shared there is not supposed to feed back into non-health chats. Conversations in Health are encrypted in transit and at rest, and OpenAI says they are not used to train its foundation models.

When a user starts discussing symptoms or medications in the regular ChatGPT interface, the system will now suggest moving the exchange into the designated Health tab for those additional protections.

Fidji Simo, OpenAI’s CEO of applications, framed the tool as a way to help patients navigate complex systems, not replace clinicians.

“AI systems are particularly good at synthesizing large amounts of information, which makes them well suited to helping doctors consider full medical histories, overlapping conditions, medications and risk factors all together,” Simo wrote in a post on Substack. “Time constraints and language barriers mean doctors can’t always fully explain what they’re seeing and why they’re recommending a particular course of treatment … AI can translate that information into plain language and has infinite time for follow-up questions and [explanations].”

At the same time, OpenAI acknowledges that its protections have boundaries. Like other data stored in the cloud, information in ChatGPT Health could still be obtained through a subpoena or court order.

Asked during a briefing earlier this week whether the product is HIPAA compliant, OpenAI’s head of health, Nate Gross, said that “in the case of consumer products, HIPAA doesn’t apply in this setting — it applies toward clinical or professional health care settings,” The Verge reports.

Built with physicians

OpenAI is also leaning heavily on physician involvement to distinguish ChatGPT Health from earlier, more generic uses of the chatbot in medicine.

According to the announcement, the company has spent the past two years working with more than 260 physicians who have practiced in 60 countries across dozens of specialties. Those physicians have reviewed and scored model outputs more than 600,000 times across 30 health domains, feedback OpenAI says helped train the underlying model to prioritize safety, clarity and appropriate escalation.

The company’s physician network also contributed to the design of HealthBench, an assessment framework that evaluates responses against physician-written rubrics rather than exam-style questions. OpenAI says the model that powers Health is tuned to, for example, explain lab values in accessible language, flag warning signs in wearable and wellness-app data that warrant urgent in-person care, and summarize care instructions after a visit.

The launch comes amid ongoing scrutiny of general-purpose AI chatbots in health contexts. OpenAI recently disclosed that more than a million ChatGPT users each week send messages that include explicit indicators of suicidal planning or intent and estimates that 560,000 weekly users now show possible signs of psychosis or mania.

The company faces at least one wrongful-death lawsuit from a family that alleges ChatGPT encouraged their son to take his own life in 2025, and regulators continue to investigate how AI chatbots handle particularly vulnerable users.

Several states, including Illinois and Nevada, have passed laws limiting the use of AI for mental health therapy or requiring special safeguards and disclosures for AI “companions.”

OpenAI’s own report on health usage emphasizes that most health-related conversations happen when clinics are closed, often from patients deciding whether to wait for an appointment or go to the emergency department.

How this could translate to the exam room

For physicians, ChatGPT Health adds another layer to an AI landscape that is already changing clinical workflows.

An American Medical Association (AMA) survey published in February 2025 found that 66% of physicians reported using some form of AI in practice during the year prior, up from 38% in 2023, with many citing documentation, administrative work and patient education as primary uses.

ChatGPT Health is aimed at the other side of the relationship — patients who need help deciphering their portal or making sense of the intricacies of health insurance. One in four of ChatGPT’s more than 800 million regular weekly users ask at least one health-related question in a given week, and more than 5% of all messages are health-related.

In practice, physicians could see:

  • Patients arriving with ChatGPT-generated summaries of lab trends, seeking validation.
  • Families using the tool to draft appeal letters after coverage denials.
  • People in rural hospital deserts seeking advice about whether they should drive hours for care.

The quality of those outputs will vary, and physicians will still need to review primary data rather than rely on an AI-written summary. But the questions patients bring into the room may sound different when an AI assistant has already walked them through the basics.
