Keywords: Representation learning, manipulation, hypernetworks
TL;DR: We present HyperTASR, a novel framework for task-aware scene representations in robotic manipulation that uses hypernetworks to dynamically adapt perceptual processing to both the task objective and the progression of execution.
Abstract: Effective policy learning for robotic manipulation requires scene representations that selectively capture task-relevant environmental features. Current approaches typically employ task-agnostic representation extraction, failing to emulate the dynamic perceptual adaptation observed in human cognition. We present HyperTASR, a hypernetwork-driven framework that modulates scene representations based on both task objectives and the execution phase. Our architecture dynamically generates representation transformation parameters conditioned on task specifications and progression state, enabling representations to evolve contextually throughout task execution. This approach maintains architectural compatibility with existing policy learning frameworks while fundamentally reconfiguring how visual features are processed. Unlike methods that simply concatenate or fuse task embeddings with task-agnostic representations, HyperTASR establishes computational separation between task-contextual and state-dependent processing paths, enhancing learning efficiency and representational quality. Comprehensive evaluations in both simulation and real-world environments demonstrate substantial performance improvements across different representation paradigms. Most notably, HyperTASR elevates success rates by over 27% when applied to GNFactor and achieves unprecedented single-view performance exceeding 80% success with 3D Diffuser Actor. Through ablation studies and attention visualization, we confirm that our approach selectively prioritizes task-relevant scene information, closely mirroring human adaptive perception during manipulation tasks.
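To make the described mechanism concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of a hypernetwork that, given a task embedding and an execution-progress scalar, emits parameters that transform an otherwise task-agnostic scene representation. All module names, dimensions, and the FiLM-style modulation used here are assumptions for illustration only.

```python
# Minimal sketch of a task- and progression-conditioned representation transform.
# Hypothetical names and dimensions; the real HyperTASR architecture may differ.
import torch
import torch.nn as nn


class TaskAwareSceneRepresentation(nn.Module):
    def __init__(self, feat_dim=256, task_dim=128):
        super().__init__()
        # Task-agnostic visual feature processing (placeholder backbone).
        self.backbone = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Hypernetwork: maps (task embedding, progress) to per-channel
        # scale and shift applied to the scene features, keeping the
        # task-contextual path separate from the state-dependent path.
        self.hyper = nn.Sequential(
            nn.Linear(task_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, 2 * feat_dim),
        )

    def forward(self, scene_feat, task_emb, progress):
        # scene_feat: (B, feat_dim), task_emb: (B, task_dim), progress: (B, 1) in [0, 1]
        h = self.backbone(scene_feat)
        params = self.hyper(torch.cat([task_emb, progress], dim=-1))
        scale, shift = params.chunk(2, dim=-1)
        # Representation evolves with the task objective and execution phase.
        return (1 + scale) * h + shift


# Usage: feed the modulated representation to any downstream policy head.
model = TaskAwareSceneRepresentation()
feat = model(torch.randn(4, 256), torch.randn(4, 128), torch.rand(4, 1))
```

Because the hypernetwork only reshapes how existing features are processed, such a module can in principle be dropped in front of an unmodified policy head, which is consistent with the architectural compatibility claimed in the abstract.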
Supplementary Material: zip
Spotlight: zip
Submission Number: 240