Generative AI: Its value and risks for physicians

The technology can improve efficiency and patient outcomes, but users must be wary of privacy hazards and data-accuracy pitfalls

©Tierney-stock.adobe.com

Generative artificial intelligence (AI) is making rapid inroads in enterprises across sectors of the economy, including health care. The technology burst onto the scene in November 2022, when artificial intelligence lab OpenAI released its generative AI-powered chatbot, ChatGPT. Monthly active users reached an estimated 100 million within two months of launch, according to a Reuters report.

Since then, organizations have been racing to adopt the technology to reduce operational costs, boost productivity and enhance customer experience.

Health care enterprises from large provider systems to physicians’ offices and practices are also racing to adopt generative AI. These organizations are looking to leverage the technology to help reduce administrative burdens, increase operational efficiency and improve patient outcomes.

While this technology holds much promise, as with any new technology, there are risks associated with it. To safely and securely adopt generative AI applications, physicians need to understand where it can add value to their offices and practices and how to mitigate the risks associated with deploying the technology.

Use cases adding value

Doctors are ready to embrace generative AI to take advantage of the technology’s power to streamline workflows and support clinical decision-making. A report by Elsevier Health revealed that 42% of clinicians consider physician use of generative AI technologies in the next two to three years desirable.

Physician readiness to adopt generative AI technology is fueled primarily by its ability to reduce administrative burdens and enhance patient experience. For example, physicians are exploring using generative AI to automate clinical documentation during patient exams and prepopulate visit summaries in electronic health records (EHRs). This automation can significantly alleviate the administrative burdens of manual documentation and coding.

Physicians can also use generative AI in their practices to respond to patient inquiries more quickly, generate personalized follow-up emails and appointment reminders and schedule appointments based on doctor availability and patient needs. Translating medical terminology into layman’s terms to help patients better understand diagnoses, procedures and treatment plans is another way physicians can leverage generative AI.

New use cases for generative AI in health care continue to evolve. For example, Google Cloud recently announced AI-powered search capabilities designed to help clinicians quickly access information from different data sources, such as clinical notes and medical data in EHRs.

All of these generative AI use cases can help reduce the growing administrative burdens that are leading to physician burnout. That is critical considering the United States faces a projected shortage of up to 124,000 physicians by 2034.

Risks of generative AI

While physicians may be ready to embrace generative AI, they must first understand its risks and how to mitigate them before deploying the technology. Among these risks are:

Cybersecurity risks: Cyberattacks continue to plague the health care sector as criminals work to gain access to sensitive patient data. A recent report from Proofpoint and the Ponemon Institute on cybersecurity in health care found that 88% of surveyed organizations experienced at least one cyberattack in the past 12 months, with an average of 40 attacks per organization. Data from Check Point revealed that average weekly cyberattacks in the health care sector in 2022 reached 1,463, a 74% increase over 2021.

The growing use of generative AI can widen the attack surface in physician practices. Generative AI technologies are trained on and store large amounts of data, making them attractive targets for cybercriminals intent on stealing protected health information (PHI).

Sensitive medical information can also be fed into AI applications by employees. Data leaving the confines of internal systems poses a significant cybersecurity risk to all businesses.

Privacy risks: Generative AI technology also presents a risk to patient privacy. PHI fed into AI applications can be used to train the applications’ AI models and potentially be shared with other users. This increases the risk of unauthorized access to or misuse of personal data. It can also put physician practices at risk of noncompliance with HIPAA regulations, which require health care organizations to protect the privacy and security of health information.

Further elevating privacy risk is lack of transparency. Organizations using external generative AI applications often lack visibility into how these apps collect, use, share and delete data.

Data accuracy: Data accuracy is another concern with generative AI. The technology can sometimes “hallucinate,” generating inaccurate information and fabrications that are presented in a credible way. Using or basing decisions on inaccurate data is, to say the least, problematic in health care where there is no margin for error.

Mitigating the risks of generative AI

Generative AI holds a lot of promise for physicians, but they should take steps to ensure they reap its benefits safely and securely.

When using any external application, it is important to review each solution provider’s terms of service and data protection and security policies. It is also important to conduct due diligence to determine whether the application uses encryption, if data is anonymized and whether the application complies with HIPAA and other applicable privacy regulations.

In addition, physicians should develop and implement policies governing the use of generative AI in their practices. The policy should not only spell out which tools employees are permitted to use but what information they may feed into them. Strict access controls to data can reduce the risk of sensitive information being fed into AI applications and better safeguard patient privacy.

To mitigate concerns surrounding data accuracy, all AI-generated data and content should be evaluated and validated to ensure these outputs are accurate. Human oversight is needed before using or making decisions based on information generated by AI.

AI offers exciting opportunities for physicians but as with any new technology, it comes with uncertainties and risks. By understanding these risks and taking steps to mitigate them, physicians can more safely and securely leverage the value of this technology in their practices to help reduce administrative burdens, increase operational efficiency and improve patient outcomes.

Anurag Lal is CEO of NetSfere.
