This paper explores how large language model-based robots can assist in detecting anomalies in high-risk environments, and how users perceive their usability and reliability when evaluated in a safe virtual environment. We present a system in which a robot powered by a state-of-the-art vision-language model autonomously annotates potential hazards in a virtual world and provides users with contextual safety information through a VR interface. We conducted a user study to evaluate the system across metrics such as trust, user satisfaction, and efficiency. Results showed high user satisfaction and clear hazard communication, while trust remained moderate.
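As a minimal illustrative sketch only (not the paper's implementation), the snippet below shows how a camera frame from the virtual robot might be sent to a vision-language model to request structured hazard annotations that a VR interface could then display. The model choice (GPT-4o via the OpenAI API), the prompt wording, and the JSON schema are assumptions; the paper does not specify its VLM or prompting setup.

```python
# Hypothetical sketch: query a vision-language model for hazard annotations.
# Model, prompt, and output schema are illustrative assumptions.
import base64
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def annotate_hazards(frame_path: str) -> list[dict]:
    """Send one rendered camera frame to the VLM and return hazard annotations."""
    with open(frame_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("List potential safety hazards visible in this scene. "
                          "Reply as a JSON object of the form "
                          '{"hazards": [{"label": ..., "severity": ..., "advice": ...}]}.')},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)["hazards"]


if __name__ == "__main__":
    for hazard in annotate_hazards("frame_0001.jpg"):
        # Annotations like these could be surfaced as contextual safety
        # information in the VR interface.
        print(f"{hazard['label']} ({hazard['severity']}): {hazard['advice']}")
```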