
Good news: AI can’t replace primary care physicians…yet
AI models evaluated for accuracy in providing preventive medicine recommendations fare poorly compared to humans
A recent study evaluated the accuracy of responses from ChatGPT-4 and Bard to questions about preventive medicine recommendations.
The findings revealed varying degrees of accuracy in the responses provided by ChatGPT-4 and Bard. ChatGPT-4 produced 28.6% accurate responses, 42.8% accurate responses with missing information, and 28.6% inaccurate responses. Bard performed better, with 53.6% accurate responses, 28.6% accurate responses with missing information, and 17.8% inaccurate responses.
Notably, both AI models struggled with immunization-related questions, producing considerable inaccuracies. ChatGPT-4's knowledge base was further limited by outdated recommendations, highlighting the importance of continuous updates in AI systems. Bard, which can continually update its information, demonstrated higher accuracy rates, albeit with room for improvement in certain areas, according to the study.
While AI tools can provide valuable information, particularly for patient education, they should not be seen as replacements for physicians, researchers say. The study emphasized that AI tools should serve as supplementary resources rather than sole sources of medical advice. The researchers say the findings also underscore the need for ongoing evaluation and updates of AI tools to ensure their effectiveness and relevance in health care settings.
While AI models like ChatGPT-4 and Bard show promise in providing medical recommendations, their accuracy and relevance need improvement, according to the study authors.