Verifiable Control and Calibrated Trust in Embodied Neuromorphic Agents for Safety-Critical Applications

Published: 20 Nov 2025, Last Modified: 09 Mar 2026 · AAAI 2026 TrustAgent Workshop Poster · CC BY 4.0
Keywords: Agentic AI, Embodied AI, Trustworthy AI, Safety-Critical Systems, Neuromorphic Computing, Spiking Neural Networks, Runtime Assurance, Explainable AI (XAI), Human-in-the-Loop
TL;DR: We present a neuromorphic embodied agent with a runtime safety supervisor and validated explanations, achieving verifiable safety, millisecond latency, and improved human oversight in safety-critical settings.
Abstract: Agentic systems for physical-world applications must satisfy strict bounds on latency, energy, and safety. We present an embodied agent architecture built on spiking neural networks, integrated with a runtime safety supervisor and a calibrated human-in-the-loop interface. The system provides verifiable control through formal safety envelopes and produces auditable evidence objects at runtime. Hardware-in-the-loop validation on a BrainChip Akida neuromorphic processor demonstrates a 92% mission success rate under combined adversarial and environmental faults, with a median inference latency of 1.2 ms and an energy cost of 45 $\mu$J per inference. A 90-participant study confirms the system's utility for human oversight: integrated explanations increased operator diagnostic accuracy from 61.2% to 88.5% and subjective trust from 2.8 to 4.5 on a five-point Likert scale, with cognitive load assessed via the NASA-TLX framework. These findings establish a practical, verifiable paradigm for embodied agency at the edge, complementing LLM-centric architectures with a deployable solution for applications where safety and resource efficiency are paramount.
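To make the abstract's "formal safety envelope" and "auditable evidence object" concrete, here is a minimal illustrative sketch of how such a runtime supervisor could be structured. All names, fields, and thresholds (`SafetyEnvelope`, `v_max`, `d_min`, the verdict labels) are hypothetical and not taken from the paper; this is a sketch of the general runtime-assurance pattern, not the authors' implementation.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SafetyEnvelope:
    """Hypothetical formal envelope: simple bounds on the agent state."""
    v_max: float = 2.0   # maximum speed (m/s), illustrative limit
    d_min: float = 0.5   # minimum obstacle clearance (m), illustrative limit

def supervise(velocity: float, clearance: float, env: SafetyEnvelope) -> dict:
    """Runtime check: allow the control action only when the state lies
    inside the envelope, and always emit an auditable evidence object."""
    safe = velocity <= env.v_max and clearance >= env.d_min
    return {
        "timestamp": time.time(),
        "state": {"velocity": velocity, "clearance": clearance},
        "envelope": asdict(env),
        "verdict": "nominal" if safe else "fallback",
    }

# Each decision yields a JSON-serializable record for later audit.
record = supervise(velocity=1.5, clearance=0.8, env=SafetyEnvelope())
print(json.dumps(record["verdict"]))
```

In this pattern the supervisor sits between the learned (spiking) policy and the actuators: a "fallback" verdict would trigger a pre-verified safe behavior, while the emitted records form the audit trail that the human-in-the-loop interface can present as explanations.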
Submission Number: 71