
AI: A powerful tool for improving health care efficiency and safety


Technology has the ability to streamline administrative tasks, reduce medical errors


Vivek Desai

As workforce shortages, financial constraints and medical errors continue to pervade the US health care industry, hospitals and health systems are under pressure to do more with less while still providing quality patient care. One important consideration for health care organizations is the impact of operations technology, which can be either an efficiency-enabler or a burnout-driver. Implementing connected health care technology rather than point solutions or disparate systems is key to increasing workforce efficiency, reducing burnout, and improving quality.

Connected health care operations involves harnessing actionable data and leveraging artificial intelligence (AI) and insightful analytics to break down silos and improve safety, accuracy and efficiency across the continuum of care. Through interoperable, AI-enabled technology, health care organizations can empower frontline staff, streamline processes, and predict and prevent patient safety incidents, thereby providing safer patient care and more satisfied employees.

Empowering staff through AI and machine learning

Physicians and staff members spend an inordinate amount of time on administrative tasks such as scheduling and documentation, limiting their time spent with patients. One survey of over 1,700 physicians found that on average, 24% of working hours were spent on administrative tasks, with primary care physicians and women reporting spending more time on administrative tasks than other physicians.

Additionally, research has shown that nurses spend only 21% of their time on direct patient care due to clinical documentation and administrative tasks, and that up to 30% of nurses’ administrative tasks could be offloaded to AI.

Machine learning (ML) and AI tools such as large language models (LLMs) can streamline and optimize administrative workloads such as staff scheduling, policy management, risk management and provider credentialing. When it comes to document and policy management, we can now train AI models on federal, state and institutional policies so that they can identify where an update is required or where two policies conflict.

This is a huge time and energy saver for health care organizations, and helps to reduce errors—after all, who can remember thousands of policies at once? LLMs can also streamline the credentialing process by extracting provider data in a standardized format, making it easier to search and input—thus reducing the administrative burden for data entry staff.
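To make the policy-checking idea concrete, here is a simplified sketch in Python. The policy records, review dates and "must"/"must not" heuristic are all hypothetical illustrations; a production system would train a model on the full text of federal, state and institutional policies rather than rely on keyword matching.

```python
from datetime import date

# Hypothetical policy records for illustration only.
policies = [
    {"id": "P-101", "topic": "hand hygiene",
     "directive": "staff must sanitize before patient contact",
     "next_review": date(2023, 1, 1)},
    {"id": "P-102", "topic": "hand hygiene",
     "directive": "staff must not sanitize before patient contact",
     "next_review": date(2026, 1, 1)},
    {"id": "P-200", "topic": "visitor access",
     "directive": "visitors must sign in at the front desk",
     "next_review": date(2026, 6, 1)},
]

def overdue(policies, today):
    """Flag policies whose scheduled review date has passed,
    i.e. where an update is required."""
    return [p["id"] for p in policies if p["next_review"] < today]

def conflicts(policies):
    """Naive conflict heuristic: two policies on the same topic where
    one directive says 'must not' and the other does not."""
    found = []
    for i, a in enumerate(policies):
        for b in policies[i + 1:]:
            if a["topic"] == b["topic"]:
                texts = (a["directive"], b["directive"])
                if any("must not" in t for t in texts) and \
                   any("must not" not in t for t in texts):
                    found.append((a["id"], b["id"]))
    return found
```

Running `overdue(policies, date(2024, 1, 1))` flags the stale policy, and `conflicts(policies)` surfaces the contradictory pair, which is the kind of check a trained model can perform across thousands of real policies.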

Offloading or streamlining administrative tasks through AI can free up physician and staff time and enable more focused, strategic work, leading to higher quality and an improved bottom line. In fact, an estimated 5% to 10% of health care spending can be saved through AI—equivalent to $200-$360 billion. More importantly, reducing the administrative burden on frontline health care workers can reduce burnout and improve retention, leading to more cost savings and improved patient care and safety.

Predicting and preventing adverse events

AI also plays a key role in safer patient care by improving reporting and prevention of harm incidents. Every day, patient safety incidents occur that go unrecorded. Without comprehensive incident reporting, it’s very difficult to learn from patient safety incidents and improve prevention and response. LLMs can automatically populate incident report forms, categorize incidents according to a set taxonomy and identify trends across reports. Not only is this a significant time-saver for an already strained workforce, but as the model becomes even more advanced, AI can help identify and analyze trends, enabling better-informed decision-making and creating a safer environment for all.
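A minimal sketch of the categorize-and-trend step described above, in Python. The taxonomy, keywords and sample narratives are invented for illustration; an actual system would use an LLM to populate the full incident report form and classify free-text narratives, not a keyword lookup.

```python
# Hypothetical incident taxonomy with illustrative keywords.
TAXONOMY = {
    "medication": ["dose", "medication", "prescription"],
    "fall": ["fell", "fall", "slipped"],
    "equipment": ["pump", "monitor", "device"],
}

def categorize(narrative):
    """Map a free-text incident narrative onto the set taxonomy."""
    text = narrative.lower()
    hits = [cat for cat, words in TAXONOMY.items()
            if any(w in text for w in words)]
    return hits or ["uncategorized"]

def trend_counts(reports):
    """Tally categories across reports to surface recurring
    incident types for trend analysis."""
    counts = {}
    for narrative in reports:
        for cat in categorize(narrative):
            counts[cat] = counts.get(cat, 0) + 1
    return counts
```

For example, `trend_counts(["Wrong dose administered", "Patient fell near bed", "Infusion pump alarm ignored"])` tallies one medication, one fall and one equipment incident, the kind of aggregate view that supports better-informed safety decisions.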

By making it easier to report incidents, AI-enabled risk reporting can help to break health care’s “wall of silence” around medical errors. Mistakes happen, and health care workers shouldn’t be ashamed to report an incident or afraid of retaliation and job loss. LLMs are most effective when they can learn from large amounts of data, so health care organizations need to report every incident that occurs to reap the benefits.

With a culture of safety and responsibility and an incident reporting solution that incorporates AI insights, health systems can extract learnings from medical errors and near-misses so that they can move forward with better safety measures in place.

The challenges of AI in health care

While AI and LLMs have the potential to revolutionize health care organizations' operational efficiency and safety, the risks and challenges must be understood. Ensuring LLMs remain secure and compliant is the most important consideration when leveraging this technology, and it often requires a dedicated in-house team or a third-party partner. Transparency and accountability should also be key focus points for any organization. Actively addressing privacy concerns, understanding how a model reached a certain answer, and maintaining a secure development cycle should all be priorities for your team.

From an overall technology perspective, misinformation and bias are also potential threats. LLMs can generate content that seems plausible but is false or misleading, which can be used to spread misinformation and disinformation. The use of deepfakes and synthetic media is another emerging threat to consider. While experts may be able to discern whether certain text, images or audio are real or deepfake-created, that distinction will be much more challenging for the general public. LLMs are only as good as the data they are trained on, and there is also a risk of bias based on factors such as gender, race and other demographics. Training a model on one patient population does not necessarily mean that model will work for another. For example, an AI algorithm used to predict future risk of breast cancer that has been trained on a primarily non-Black population may falsely label Black patients as "low risk." To reduce errors and bias, LLMs should be trained consistently, and all outputs should be checked by a human.

AI should augment human expertise

AI and LLMs in tandem with connected health care operations can improve safety, accuracy and operational efficiency across the continuum of care. But they aren’t meant to replace human health care workers. These tools should instead be leveraged to enhance the human by driving accuracy and augmenting human expertise.

At the end of the day, health care is a people-driven industry, and the empathy and experience of health care workers are irreplaceable. AI should amplify the strengths and talents of health care workers, freeing up staff to focus on patient care and other high-value tasks and moving our industry a step further toward achieving safer care for all.

Vivek Desai, CISM, is chief technology officer of North America at RLDatix.
