Despite rising guardrails, health care workers continue using personal artificial intelligence and cloud apps — often in ways that violate HIPAA and put patients’ trust at risk.
A growing number of health care workers are turning to generative artificial intelligence (AI) tools like ChatGPT and Google Gemini to lighten their workloads. But many are doing it without the proper safeguards — and in ways that could be violating federal privacy laws.
That warning comes from Netskope’s 2025 Threat Labs Healthcare report, which analyzed cloud app usage, AI trends and data violations across health systems. It found sensitive patient data — including protected health information (PHI) — is frequently uploaded to tools and platforms that aren’t HIPAA-compliant, putting both patients and clinicians at risk.
“Beyond financial consequences, breaches erode patient trust and damage organizational credibility with vendors and partners,” says Ray Canzanese, director of Netskope Threat Labs.
The health care industry embraced AI faster than many others, and, according to Netskope’s data, most organizations now use at least some form of generative AI to streamline operations or reduce administrative load.
At the same time, 71% of health care workers are still using personal AI accounts for work — down from 87% the year before, but still worryingly high. Since most public AI tools like ChatGPT and Gemini do not sign business associate agreements (BAAs) or meet HIPAA compliance standards, using them with PHI is a potential violation.
Netskope’s findings suggest that these privacy lapses are not just risks — they’re happening routinely. Of all data policy violations in health care organizations, 81% involved regulated data like PHI. The remaining 19% included intellectual property, source code, and internal secrets.
Personal cloud storage apps like Google Drive, OneDrive and Amazon S3 are also being misused. These apps are often used by well-meaning employees trying to save time, but without proper authorization, they become major compliance risks. “Health care organizations must balance the benefits of genAI with the implementation of strict data governance policies to mitigate associated risks,” Netskope warns.
Beyond data privacy, Netskope found a sharp rise in malware threats delivered through commonly used cloud apps. In 2025, 13% of health care organizations experienced malware downloads from GitHub.
Other widely used cloud services are also being exploited by attackers to deliver malicious payloads.
Threat actors are increasingly using social engineering to trick staff into downloading infostealers and ransomware via these platforms. Once malware gains a foothold, it can spread through networks, steal credentials or deploy ransomware payloads.
To fight back, more organizations are deploying data loss prevention (DLP) tools. Use of DLP has jumped from 31% to 54% over the past year, signaling growing awareness of the risks associated with unsanctioned genAI use.
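At their simplest, DLP policies inspect outbound content for patterns that look like regulated data and block or flag the transfer before it leaves the organization. The sketch below is purely illustrative and is not based on any specific Netskope feature; the pattern names and example values are assumptions chosen for demonstration, and real DLP engines rely on much richer detection (exact-match fingerprints, dictionaries, OCR and machine-learning classifiers) enforced at the network or endpoint level rather than in application code.

```python
import re

# Illustrative-only patterns for PHI-like identifiers (hypothetical examples).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:\s]*\d{2}/\d{2}/\d{4}\b", re.IGNORECASE),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in an outbound payload."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def should_block_upload(text: str) -> bool:
    """Block the transfer if any regulated-data pattern matches."""
    return bool(scan_outbound_text(text))

if __name__ == "__main__":
    prompt = "Summarize this note: Patient John Doe, MRN: 00123456, DOB: 01/02/1980"
    print(scan_outbound_text(prompt))   # ['mrn', 'dob']
    print(should_block_upload(prompt))  # True
```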
Many organizations are also blocking specific genAI apps outright.
Netskope encourages health care leaders to audit app use regularly and to block tools that pose a disproportionate risk or have weaker security controls. Blocking also helps redirect workers to safer, enterprise-approved tools.
While much of the threat comes from tech misuse, the root problem often lies in human behavior. Workers use “shadow AI” — unsanctioned AI tools — because it’s fast and easy. But that convenience can lead to costly HIPAA violations.
To counter this, Netskope recommends that health care organizations combine technical controls such as DLP and app blocking with clear policies and regular staff training on approved AI tools.
Netskope’s report doesn’t suggest that AI shouldn’t be used in health care — far from it. It argues for thoughtful, secure adoption. The tools are powerful, but without the right guardrails, they can just as easily become vulnerabilities.
In short, the health care sector doesn’t just need smarter tech. It needs smarter oversight — and a renewed focus on training, transparency and patient trust.