PARSEC: Preference Adaptation for Robotic Object Rearrangement from Scene Context

Published: 07 May 2025 · Last Modified: 07 May 2025 · ICRA Workshop on Human-Centered Robot Learning · CC BY 4.0
Workshop Statement: Our primary contribution in this work is a novel object rearrangement benchmark, PARSEC, in which the robot adapts to a user’s organizational preferences from the observed scene context, ensuring user-aligned object placements. We evaluate several existing personalized rearrangement models, along with our proposed rearrangement model ContextSortLM, on the PARSEC benchmark through computational experiments and crowdsourced user evaluations. Our findings highlight existing challenges in adapting to novel rearrangement preferences in previously unseen environments and provide design guidelines for future personalized rearrangement models. Our work closely relates to the theme of human-centered robot learning from foundation models. PARSEC is grounded in a human-centered application: household robots rearranging objects in human environments. By comparing several LLM-based rearrangement models, our evaluation provides guidelines on how to leverage pre-trained commonsense knowledge to adapt to user preferences from the observed scene context. Moreover, our work presents a new interaction paradigm in which robots learn user preferences from observation, without instructions or demonstrations, and opens up new avenues of research in active perception for personalized object rearrangement.
Keywords: object rearrangement, user personalization, robot assistance, semantic scene understanding, LLM
TL;DR: PARSEC is a new robotics benchmark for personalized object rearrangement, enabling robots to adapt to user preferences without instruction. We evaluate prior rearrangement methods on PARSEC and propose a novel algorithm, ContextSortLM.
Abstract: Object rearrangement is a key task for household robots, requiring personalization without explicit instructions, meaningful object placement in environments already populated with objects, and generalization to unseen objects and new environments. To facilitate research addressing these challenges, we introduce PARSEC, an object rearrangement benchmark for learning user organizational preferences from observed scene context to place objects in a partially arranged environment. PARSEC is built upon a novel dataset of 110K rearrangement examples crowdsourced from 72 users, featuring 93 object categories and 15 environments. We also propose ContextSortLM, an LLM-based rearrangement model that places objects in partially arranged environments by adapting to user preferences from prior and current scene context while accounting for multiple valid placements. We evaluate ContextSortLM and existing personalized rearrangement approaches on the PARSEC benchmark, and complement these findings with a crowdsourced evaluation in which 108 online raters rank model predictions by their alignment with user preferences. Our results indicate that personalized rearrangement models leveraging multiple scene context sources outperform models relying on a single context source. Moreover, ContextSortLM outperforms other models in placing objects to replicate the target user's arrangement and ranks among the top two in all three environment categories, as rated by online evaluators. Importantly, our evaluation highlights challenges associated with modeling environment semantics across different environment categories and provides recommendations for future work.
Submission Number: 22