LLM-Supported Safety Annotation in High-Risk Environments

13 Feb 2025 (modified: 01 Mar 2025) · HRI 2025 Workshop VAM Submission · CC BY 4.0
Keywords: Large Language Models, Virtual Reality, Human-Robot Interaction, Hazard Detection, Safety Annotation
TL;DR: We present a system in which a robot powered by a state-of-the-art large vision-language model autonomously annotates potential hazards in a virtual world.
Abstract: This paper explores how robots based on large language models can assist in detecting anomalies in high-risk environments, and how users perceive their usability and reliability within a safe virtual environment. We present a system in which a robot using a state-of-the-art vision-language model autonomously annotates potential hazards in a virtual world. The system provides users with contextual safety information via a VR interface. We conducted a user study to evaluate the system's performance across metrics such as trust, user satisfaction, and efficiency. Results demonstrated high user satisfaction and clear hazard communication, while trust remained moderate.
Submission Number: 7
