TAMPering with RLBench: Enabling joint developments in Task and Motion Planning and Reinforcement Learning research
Keywords: TAMP, RL, Robotics, Simulation, Benchmark
TL;DR: This paper introduces an enhanced RLBench-based platform that improves data generation and supports research in TAMP, RL, and neuro-symbolic AI for better robotic planning and learning.
Abstract: The field of robotics, spanning task and motion planning (TAMP), hierarchical reinforcement learning (HRL), and neuro-symbolic AI, faces challenges in handling complex long-horizon tasks with sparse rewards. Although planning approaches show potential, their scalability is limited by the lack of accurate world models and symbolic abstractions. More reliable data are needed to support learning these representations and to unify fragmented subcommunities. This paper presents an enhanced simulation platform, built on RLBench, designed to meet the need for efficient data generation. While RLBench was created purely for reinforcement learning (RL) research, our simulator generates the richer variety of data required across TAMP, RL, and neuro-symbolic AI research, supporting the study of symbolic and composable representations, multimodal inputs, and hierarchical abstractions. Our platform also supports the evaluation of generalizable and interpretable world models, addressing key data generation challenges in robotics. This can foster collaboration between fragmented research areas and contribute to the development of robust and scalable systems for robotic planning.
Submission Number: 39