Keywords: Large Language Models, Virtual Reality, Human-Robot Interaction, Hazard Detection, Safety Annotation
TL;DR: We present a system where a robot powered by a state-of-the-art large vision-language model autonomously annotates potential hazards in a virtual world.
Abstract: This paper explores how large language model-based robots can assist in detecting anomalies in high-risk environments and how users perceive their usability and reliability within a safe virtual environment. We present a system where a robot using a state-of-the-art vision-language model autonomously annotates potential hazards in a virtual world. The system provides users with contextual safety information via a VR interface. We conducted a user study to evaluate the system's performance across metrics such as trust, user satisfaction, and efficiency. Results demonstrated high user satisfaction and clear hazard communication, while trust remained moderate.
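To make the annotation idea concrete, the sketch below shows one plausible way a vision-language model could be asked to label hazards in a rendered frame from the virtual environment. This is not the authors' implementation: it assumes an OpenAI-style chat-completions endpoint, and the model name, prompt, and JSON schema are illustrative placeholders.

```python
# Minimal sketch (illustrative, not the paper's system): query a vision-language
# model for hazard annotations of one camera frame from the virtual world.
import base64
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def annotate_hazards(image_path: str) -> list[dict]:
    """Return a list of hazard annotations for a single rendered frame."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder for a state-of-the-art VLM
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("List every potential safety hazard visible in this scene. "
                          "Respond only with a JSON array of objects with keys "
                          "'label', 'severity' (low/medium/high), and 'advice'.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    # Naive parsing for the sketch; a VR interface could render each entry
    # as a contextual safety label anchored to the scene.
    return json.loads(response.choices[0].message.content)
```

In a setup like this, each returned annotation could drive the contextual safety overlays described in the abstract; the exact prompt, output schema, and model choice would be design decisions of the actual system.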
Submission Number: 7