The Horcrux: Mechanistically Interpretable Task Decomposition for Detecting and Mitigating Reward Hacking in Embodied AI Systems

Published: 27 Nov 2025, Last Modified: 27 Nov 2025 | E-SARS Poster | CC BY 4.0
Keywords: Reward hacking, Mechanistic interpretability, Task Decomposition, AI Safety
TL;DR: A mechanistically interpretable architecture that decomposes tasks into subtasks to detect and mitigate reward hacking in embodied AI systems.
Abstract: Embodied AI agents exploit flaws in reward signals through reward hacking, achieving high proxy scores while failing the true objective. We introduce Mechanistically Interpretable Task Decomposition (MITD), a hierarchical transformer architecture with Planner, Coordinator, and Executor modules that detects and mitigates reward hacking. MITD decomposes tasks into interpretable subtasks while generating diagnostic visualizations, including Attention Waterfall Diagrams and Neural Pathway Flow Charts. Experiments on 1,000 hh-rlhf samples show that optimal decomposition depths of 12-25 steps reduce reward-hacking frequency by 34% across four failure modes. These results demonstrate that interpretable task decomposition detects reward hacking more effectively than post-hoc behavioral monitoring.
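To make the architecture described in the abstract concrete, the following is a minimal sketch (not the authors' code) of a hypothetical Planner / Coordinator / Executor decomposition loop. The module internals, method names, thresholds, and the use of a 12-step depth are illustrative assumptions only; the paper's actual transformer modules and diagnostics are not reproduced here.

```python
# Minimal sketch of a hierarchical decomposition loop with the three
# modules named in the abstract. All internals are assumptions.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Subtask:
    description: str  # human-readable, interpretable subtask label
    depth: int        # position in the decomposition (1..max_depth)


class Planner:
    def decompose(self, task: str, max_depth: int) -> List[Subtask]:
        # Hypothetical: split the task into at most max_depth interpretable steps.
        return [Subtask(f"{task} / step {i + 1}", i + 1) for i in range(max_depth)]


class Executor:
    def run(self, subtask: Subtask) -> Tuple[float, float]:
        # Placeholder environment step returning (proxy_reward, true_progress).
        return 0.9, 0.1


class Coordinator:
    def flag_reward_hacking(self, proxy_reward: float, true_progress: float) -> bool:
        # Hypothetical check: proxy reward rising while true-objective progress
        # stalls is treated as a reward-hacking signal for this subtask.
        return proxy_reward > 0.8 and true_progress < 0.2


def run_episode(task: str, depth: int = 12) -> List[Subtask]:
    planner, coordinator, executor = Planner(), Coordinator(), Executor()
    flagged = []
    for subtask in planner.decompose(task, depth):
        proxy, progress = executor.run(subtask)
        if coordinator.flag_reward_hacking(proxy, progress):
            flagged.append(subtask)  # candidate reward-hacking step for inspection
    return flagged


if __name__ == "__main__":
    print(len(run_episode("tidy the room", depth=12)), "subtasks flagged")
```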
Submission Number: 16