Designing for Transparency in Human–Robot Interaction: A Dashboard and Custom Hardware for Mechanistic Interpretability

Published: 26 Feb 2026, Last Modified: 12 Mar 2026
Venue: D-TUR 2026
License: CC BY 4.0
Keywords: Human-robot interaction, XAI, mechanistic interpretability, humanoid robotics, transparent design
TL;DR: Augmenting a humanoid robot with custom hardware and a real-time dashboard to enhance transparency
Abstract: As embodied AI systems transition from deterministic automation to fluid decision-making driven by architectures such as Vision-Language-Action (VLA) models, the inherent opacity of these "black box" systems creates significant barriers to safe and effective human-robot interaction. This paper argues that transparency should not be a post-hoc addition but a foundational design constraint integrated into the robot's physical and digital architecture. We present a humanoid demonstrator built on the Unitree G1 platform that embodies this holistic design philosophy. Our approach combines custom industrial hardware, including a specialised head unit with a Face User Interface (Face UI) for social signalling, with a real-time digital dashboard that translates complex AI reasoning into natural language and visualises the perception-to-actuation pipeline. To evaluate the efficacy of this multi-layered transparency design, we conducted a user study (N=10) comparing the fully transparent system against a baseline lacking the social interface and dashboard. Results indicate that our architecture improved participants' perceived transparency and understandability of the robot during interaction. These findings suggest that tailoring transparency mechanisms to specific user contexts is critical for the successful deployment of autonomous systems in shared human spaces.
Submission Number: 6