Comparison of a Novel Machine Learning-Based Clinical Query Platform With Traditional Guideline Searches for Hospital Emergencies: Prospective Pilot Study of User Experience and Time Efficiency

Hamza Ejaz, Hon Lung Keith Tsui, Mehul Patel, Luis Rafael Ulloa Paredes, Ellen Knights, Shah Bakht Aftab, Christian Peter Subbe

Research output: Contribution to journal › Article › peer-review

Abstract

Emergency and acute medicine doctors require easily accessible, evidence-based information to safely manage a wide range of clinical presentations. When evidence-based local guidelines cannot be found on the trust's intranet, clinicians resort to information retrieval from the World Wide Web. Artificial intelligence (AI) has the potential to make evidence-based information retrieval faster and easier. The aim of the study was to conduct a time-motion analysis comparing cohorts of junior doctors using (1) an AI-supported search engine versus (2) the traditional hospital intranet. The study also aimed to examine the impact of the AI-supported search engine on the duration of searches and on workflow when seeking answers to clinical queries at the point of care. This pre- and post-implementation observational study was conducted in 2 phases. In the first phase, clinical information searches by 10 doctors caring for acutely unwell patients in acute medicine were observed over 10 working days. Based on these findings and input from a focus group of 14 clinicians, an AI-supported, context-sensitive search engine was implemented. In the second phase, clinical practice was observed for 10 doctors for an additional 10 working days using the new search engine. The hospital intranet group (n=10) had a median of 23 months of clinical experience, while the AI-supported search engine group (n=10) had a median of 54 months. Participants using the AI-supported engine conducted fewer searches. User satisfaction and query resolution rates were similar between the 2 phases. Searches with the AI-supported engine took 43 seconds longer on average. Clinicians rated the new app with a favorable Net Promoter Score of 20. We report a successful feasibility pilot of an AI-driven search engine for clinical guidelines. Further development of the engine, including the incorporation of large language models, might improve accuracy and speed. More research is required to establish clinical impact in different user groups; focusing on new staff at the beginning of their post might be the most suitable study design. [Abstract copyright: © Hamza Ejaz, Hon Lung Keith Tsui, Mehul Patel, Luis Rafael Ulloa Paredes, Ellen Knights, Shah Bakht Aftab, Christian Peter Subbe. Originally published in JMIR Human Factors (https://humanfactors.jmir.org).]
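For context on the usability result above: a Net Promoter Score is conventionally derived from 0-10 "would you recommend?" ratings as the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6), so a score of 20 means promoters outnumber detractors by 20 percentage points. The minimal Python sketch below illustrates that standard calculation; the example ratings are hypothetical and are not data from the study.

```python
# Illustrative sketch (not study data): how a Net Promoter Score is
# conventionally computed from 0-10 "would you recommend?" ratings.
from typing import Iterable


def net_promoter_score(ratings: Iterable[int]) -> float:
    """Return NPS = % promoters (9-10) minus % detractors (0-6)."""
    ratings = list(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)


# Hypothetical ratings from 10 clinicians that happen to yield an NPS of 20,
# the same magnitude as the score reported in the abstract.
example_ratings = [10, 9, 9, 9, 8, 7, 7, 7, 6, 5]
print(net_promoter_score(example_ratings))  # -> 20.0
```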
Original language: English
Article number: e52358
Journal: JMIR Human Factors
Volume: 12
DOIs
Publication status: Published - 25 Feb 2025

Keywords

  • Prospective Studies
  • training
  • clinical experience
  • developing
  • emergency care
  • Humans
  • user group
  • testing
  • Pilot Projects
  • clinical impact
  • clinical practice
  • hospital care
  • study design
  • Machine Learning
  • mobile phone
  • information search
  • user satisfaction
  • Time and Motion Studies
  • information retrieval
  • Emergencies
  • Emergency Service, Hospital
  • users
  • Search Engine - methods
  • artificial intelligence
