Generative AI

Medical Economics Journal, March 2024, Volume 101, Issue 3

Its value and risks for physicians

AI risks and rewards: ©Pingpao - stock.adobe.com

Generative artificial intelligence (AI) is making rapid inroads in enterprises across sectors of the economy, including health care. Generative AI burst onto the scene in November 2022, when the AI lab OpenAI released ChatGPT, a chatbot powered by the technology. Monthly active users jumped to an estimated 100 million just two months after launch, according to a Reuters report. Since then, organizations have been racing to adopt the technology to reduce operational costs, boost productivity and enhance customer experience.

Health care enterprises from large provider systems to physicians’ offices and practices are also racing to adopt generative AI. These organizations are looking to leverage the technology to help reduce administrative burdens, increase operational efficiency and improve patient outcomes.

Although this technology holds much promise, as with any new technology, there are risks associated with it. To adopt generative AI applications in a safe and secure manner, physicians need to understand where it can add value to their offices and practices and how to mitigate the risks associated with deploying the technology.

Use cases are adding value

Doctors are ready to embrace generative AI to take advantage of the technology’s power to streamline workflows and support clinical decision-making. In a report by Elsevier Health, 42% of clinicians said that physician use of generative AI technologies in the next two to three years would be desirable.

Physician readiness to adopt generative AI technology is fueled primarily by its ability to reduce administrative burdens and enhance patient experience. For example, physicians are exploring using generative AI to automate clinical documentation during patient exams and prepopulate visit summaries in electronic health records (EHRs). This automation can significantly alleviate the administrative burdens of manual documentation and coding.
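As an illustration only, the sketch below shows how a practice might draft a visit summary from an exam transcript using a general-purpose generative AI API. It assumes the OpenAI Python client and the model name "gpt-4o-mini"; the prompt wording and the draft_soap_note helper are hypothetical, and any real deployment would require a HIPAA-compliant service agreement and clinician review of every draft.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

SYSTEM_PROMPT = (
    "You are a clinical documentation assistant. Turn the visit transcript "
    "into a draft SOAP note (Subjective, Objective, Assessment, Plan). "
    "Mark anything uncertain as [NEEDS CLINICIAN REVIEW]."
)

def draft_soap_note(transcript: str) -> str:
    """Return an AI-drafted SOAP note; a clinician must review and edit it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your vendor's model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
        temperature=0.2,  # keep the draft conservative and repeatable
    )
    return response.choices[0].message.content
```

The draft could then be pasted into the EHR's visit summary field for the physician to verify, rather than being filed automatically.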

Physicians can also use generative AI in their practices to respond to patient inquiries more quickly, generate personalized follow-up emails and appointment reminders and schedule appointments based on doctor availability and patient needs. Translating medical terminology into layman’s terms to help patients better understand diagnoses, procedures and treatment plans is another way physicians can leverage generative AI.
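Along the same lines, a short prompt can turn clinical language into a patient-friendly draft. The sketch below reuses the same assumed OpenAI client; the prompt and the explain_for_patient helper are illustrative only, not a vetted patient-communication workflow.

```python
from openai import OpenAI

client = OpenAI()  # same assumed setup as the documentation example above

def explain_for_patient(clinical_text: str, reading_level: str = "8th grade") -> str:
    """Rewrite clinical language in plain terms for a patient handout draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": (
                    f"Rewrite the following clinical text at a {reading_level} "
                    "reading level. Do not add new medical advice."
                ),
            },
            {"role": "user", "content": clinical_text},
        ],
    )
    return response.choices[0].message.content
```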

New use cases for generative AI in health care continue to evolve. For example, Google Cloud recently announced AI-powered search capabilities designed to help clinicians quickly access information from different data sources, such as clinical notes and medical data in EHRs.

All these generative AI use cases can help reduce the growing administrative burdens that are leading to physician burnout. That is critical considering the United States faces a projected shortage of up to 124,000 physicians by 2034.

Generative AI risks

Although physicians may be ready to embrace generative AI, before deploying the technology, they must first understand its risks and how to mitigate them. Among these are the following:

Cybersecurity risks: Cyberattacks continue to plague the health care sector as criminals work to gain access to sensitive patient data. A recent report from Proofpoint and the Ponemon Institute on cybersecurity in health care found that 88% of surveyed organizations had experienced an average of 40 attacks in the past 12 months. Data from Check Point Software Technologies revealed that the average number of weekly cyberattacks in the health care sector in 2022 reached 1,463, a 74% increase over 2021.

The growing use of generative AI can widen the attack surface in physician practices. Generative AI technologies are trained on and store large amounts of data, making them attractive targets for cybercriminals intent on stealing protected health information (PHI).

Employees can also feed sensitive medical information into AI applications. Data that leave the confines of internal systems pose a significant cybersecurity risk to any business.

Privacy risks: Generative AI technology also presents a risk to patient privacy. PHI fed into AI applications can be used to train the application’s AI models and potentially be shared with other users. This increases the risk of unauthorized access to or misuse of personal data. It can also put physician practices at risk of noncompliance with Health Insurance Portability and Accountability Act (HIPAA) regulations that require health care organizations to protect the privacy and security of health information.

Further elevating privacy risk is a lack of transparency. Organizations using external generative AI applications often have little visibility into how these apps collect, use, share and delete data.

Data accuracy: Data accuracy is another concern with generative AI. The technology can sometimes “hallucinate,” generating inaccurate information and fabrications that are presented in a credible way. Using or basing decisions on inaccurate data is, to say the least, problematic in health care, where there is no margin for error.

Mitigating the risks of generative AI

Generative AI holds a lot of promise for physicians, but reaping its benefits safely and securely requires some deliberate steps.

When using any external application, it is important to review each solution provider’s terms of service and data protection and security policies. It is also important to conduct due diligence to determine whether the application encrypts data, whether data are anonymized and whether the application complies with HIPAA and other applicable privacy regulations.

In addition, physicians should develop and implement policies governing the use of generative AI in their practices. The policy should spell out not only which tools employees are permitted to use but also what information they may feed into them. Strict access controls to data can reduce the risk of sensitive information being fed into AI applications and better safeguard patient privacy.
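To make the "what information may be fed in" rule concrete, here is a minimal, illustrative sketch of a redaction gate a practice could run before any text reaches an external AI tool. The regular expressions and helper names are hypothetical and nowhere near sufficient for HIPAA Safe Harbor de-identification; a real deployment would rely on a dedicated de-identification tool alongside the access controls described above.

```python
import re

# Illustrative patterns only; real de-identification needs far broader coverage.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace obvious identifiers with placeholders before external use."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def send_to_external_ai(text: str, approved_tool: bool) -> str:
    """Enforce the practice policy: approved tools only, redacted text only."""
    if not approved_tool:
        raise PermissionError("This AI tool is not on the practice's approved list.")
    return redact_phi(text)  # hand only the redacted text to the approved tool
```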

To mitigate concerns surrounding data accuracy, all AI-generated content should be evaluated and validated before it is used. Human oversight is needed before acting on, or making decisions based on, information generated by AI.
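One simple way to enforce that oversight is to treat every AI output as a draft that cannot enter the record until a named clinician signs off. The sketch below is a hypothetical illustration of such a workflow, not a feature of any particular EHR.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIDraft:
    """An AI-generated draft that stays out of the chart until approved."""
    content: str
    reviewed_by: Optional[str] = None        # clinician who verified the draft
    corrections: list[str] = field(default_factory=list)

    def approve(self, clinician: str, corrected_content: Optional[str] = None) -> None:
        """Record clinician sign-off, keeping a copy of any corrected draft."""
        if corrected_content is not None:
            self.corrections.append(self.content)
            self.content = corrected_content
        self.reviewed_by = clinician

def commit_to_record(draft: AIDraft) -> str:
    """Refuse to file any AI-generated text that lacks clinician sign-off."""
    if draft.reviewed_by is None:
        raise ValueError("AI-generated content requires clinician review before filing.")
    return f"Filed note reviewed by {draft.reviewed_by}"
```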

AI offers exciting opportunities for physicians but, as with any new technology, it comes with uncertainties and risks. By understanding these risks and taking steps to mitigate them, physicians can more safely and securely leverage the value of this technology in their practices to help reduce administrative burdens, increase operational efficiency and improve patient outcomes.

Anurag Lal is CEO of NetSfere.
