Health care workers are leaking patient data through AI tools, cloud apps

Key Takeaways

  • Health care workers are increasingly using generative AI tools, risking HIPAA violations due to improper safeguards and non-compliant platforms.
  • Netskope’s report reveals that 81% of data policy violations in health care involve regulated data like PHI, with personal cloud storage apps being misused.

Despite rising guardrails, health care workers continue using personal artificial intelligence and cloud apps — often in ways that violate HIPAA and put patients’ trust at risk.

© VZ_Art - stock.adobe.com

A growing number of health care workers are turning to generative artificial intelligence (AI) tools like ChatGPT and Google Gemini to lighten their workloads. But many are doing it without the proper safeguards — and in ways that could be violating federal privacy laws.

That warning comes from Netskope’s 2025 Threat Labs Healthcare report, which analyzed cloud app usage, AI trends and data violations across health systems. It found that sensitive patient data, including protected health information (PHI), is frequently uploaded to tools and platforms that aren’t HIPAA-compliant, putting both patients and clinicians at risk.

“Beyond financial consequences, breaches erode patient trust and damage organizational credibility with vendors and partners,” says Ray Canzanese, director of Netskope Threat Labs.

The risks of convenience

Health care embraced AI faster than many other industries, and most organizations now use at least some form of generative AI to streamline operations or reduce administrative load. According to Netskope’s data:

  • 88% of health care organizations use cloud-based genAI tools
  • 98% use apps that incorporate genAI features
  • 96% rely on tools that train on user data
  • 43% are experimenting with running AI systems in-house

At the same time, 71% of health care workers are still using personal AI accounts for work — down from 87% the year before, but still worryingly high. Since most public AI tools like ChatGPT and Gemini do not sign business associate agreements (BAAs) or meet HIPAA compliance standards, using them with PHI is a potential violation.

Sensitive data is leaking out

Netskope’s findings suggest that these privacy lapses are not just risks — they’re happening routinely. Of all data policy violations in health care organizations, 81% involved regulated data like PHI. The remaining 19% included intellectual property, source code, and internal secrets.

Personal cloud storage apps like Google Drive, OneDrive and Amazon S3 are also being misused. These apps are often used by well-meaning employees trying to save time, but without proper authorization, they become major compliance risks. “Health care organizations must balance the benefits of genAI with the implementation of strict data governance policies to mitigate associated risks,” Netskope warns.

Malware threats via trusted platforms

Beyond data privacy, Netskope found a sharp rise in malware threats delivered through commonly used cloud apps. In 2025, 13% of health care organizations experienced malware downloads from GitHub.

Other cloud services commonly exploited by attackers include:

  • Microsoft OneDrive
  • Amazon S3
  • Google Drive

Threat actors are increasingly using social engineering to trick staff into downloading infostealers and ransomware via these platforms. Once malware gains a foothold, it can spread through networks, steal credentials or deploy ransomware payloads.

DLP use is growing — but gaps remain

To fight back, more organizations are deploying data loss prevention (DLP) tools. Use of DLP has jumped from 31% to 54% over the past year, signaling growing awareness of the risks associated with unsanctioned genAI use.

Commonly blocked genAI apps include:

  • DeepAI (44% of organizations)
  • Tactiq (40%)
  • Scite (36%)

Netskope encourages health care leaders to audit app use regularly and to block tools that pose a disproportionate risk or have weaker security controls. Blocking also helps redirect workers to safer, enterprise-approved tools.

It’s not just tech — it’s a training issue

While much of the threat comes from tech misuse, the root problem often lies in human behavior. Workers use “shadow AI” — unsanctioned AI tools — because it’s fast and easy. But that convenience can lead to costly HIPAA violations.

To counter this, Netskope recommends that health care organizations:

  • Block nonessential or high-risk apps
  • Scan all web and cloud traffic for phishing and malware
  • Use Remote Browser Isolation (RBI) for risky domains
  • Enforce DLP policies that flag uploads of PHI, source code or intellectual property (see the sketch after this list)
  • Strengthen employee training to recognize risky behavior
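
For illustration only, here is a minimal sketch of the kind of pattern-based check a DLP policy might apply before text leaves the network for an external genAI tool. The patterns (a U.S. Social Security number format, a hypothetical medical record number field, a date-of-birth field) and the block-or-allow decision are assumptions made for this example; they do not represent Netskope’s product, a HIPAA requirement, or a complete PHI detector.

import re

# Toy patterns that often indicate PHI in free text. Real DLP engines use far
# richer detection (dictionaries, checksums, machine learning), so treat these
# as placeholders rather than a production rule set.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                          # U.S. SSN format
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),           # hypothetical MRN field
    "dob": re.compile(r"\bDOB[:\s]*\d{2}/\d{2}/\d{4}\b", re.IGNORECASE),  # date-of-birth field
}

def flag_phi(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in outbound text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this note: MRN: 00482913, DOB: 04/12/1969, chest pain on exertion..."
hits = flag_phi(prompt)
if hits:
    print(f"Upload blocked: possible PHI detected ({', '.join(hits)})")
else:
    print("Upload allowed")

In practice, a check like this would typically run in a secure web gateway or DLP proxy rather than in the application itself, and a real deployment would also log the event and steer the user toward an approved, BAA-covered tool.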

Netskope’s report doesn’t suggest that AI shouldn’t be used in health care — far from it. It argues for thoughtful, secure adoption. The tools are powerful, but without the right guardrails, they can just as easily become vulnerabilities.

In short, the health care sector doesn’t just need smarter tech. It needs smarter oversight — and a renewed focus on training, transparency and patient trust.
