Keywords: large language models, clinician perceptions, medical education, clinical workflow, ai trust, ai ethics, ai literacy
TL;DR: A survey of physicians finds that they use LLMs for efficiency gains despite significant mistrust, a tension driven by a critical lack of formal training that hinders safe clinical integration.
Abstract: Large language models (LLMs) are rapidly emerging artificial intelligence (AI) tools with the potential to transform clinical workflows, yet limited data exist on how physicians engage with them across specialties. We conducted a cross-sectional, mixed-methods survey of physicians across a large hospital system in the Northeastern United States between February and August 2025, using convenience and snowball sampling. The anonymous REDCap survey collected data on demographics, AI familiarity, applications, documentation and communication practices, perceived challenges, and desired features. Quantitative data were analyzed descriptively, and qualitative free-text responses were analyzed thematically using Google Gemini 2.5 Pro. Fifty-two physicians participated, most commonly from internal medicine (32.7\%) and surgery (26.9\%), followed by pediatrics (17.3\%) and emergency medicine (7.7\%). Overall, 71.2\% reported using at least one AI tool, most often ChatGPT. Among AI users (N=37), frequent applications included differential diagnosis (54.1\%), research and data analysis (43.2\%), documentation or administrative tasks (40.5\%), and medical education (37.8\%). Adoption patterns varied across specialties but did not differ significantly. Thematic analysis (N=36) identified four domains: AI as a documentation and communication assistant, knowledge synthesis and decision support, workflow optimization, and barriers \& mistrust. Although efficiency gains were reported, concerns about accuracy, medicolegal liability, bias, and lack of transparency were widespread, and nearly one-third of respondents cited insufficient training as a barrier. Physicians currently adopt LLMs pragmatically within a human-in-the-loop model, but safe and equitable integration will require regulatory clarity, AI literacy, and trust-building aligned with clinical needs.
Submission Number: 123