Keywords: Active Preference Learning, Task Planning, Large Language Models
Abstract: Home robots performing personalized tasks must adeptly balance user preferences with environmental affordances.
We focus on organization tasks within constrained spaces, such as arranging items into a refrigerator, where preferences for placement collide with physical limitations.
The robot must infer user preferences from a small set of demonstrations, which are far easier for users to provide than an exhaustive specification of their requirements.
While recent works use Large Language Models (LLMs) to learn preferences from user demonstrations, they encounter two fundamental challenges.
First, there is inherent ambiguity in interpreting user actions, as multiple preferences can often explain a single observed behavior.
Second, not all user preferences are practically feasible due to geometric constraints in the environment.
To address these challenges, we introduce APRICOT, a novel approach that merges LLM-based Bayesian active preference learning with constraint-aware task planning.
APRICOT refines its generated preferences by actively querying the user and dynamically adapts its plan to respect environmental constraints.
We evaluate APRICOT on a dataset of diverse organization tasks and demonstrate its effectiveness in real-world scenarios, showing significant improvements in both preference satisfaction and plan feasibility.
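The active-querying idea described above can be sketched as Bayesian preference learning: maintain a belief over candidate preferences that all explain the demonstrations, then ask the question whose answer is expected to reduce uncertainty the most. The sketch below is an illustrative assumption, not APRICOT's actual implementation; the candidate preferences, the yes/no query, and the 90% answer-noise model are all hypothetical.

```python
import math

def entropy(belief):
    """Shannon entropy of a discrete belief (dict: hypothesis -> prob)."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def update(belief, likelihood, answer):
    """Bayes update; likelihood(h, answer) gives P(answer | hypothesis h)."""
    posterior = {h: p * likelihood(h, answer) for h, p in belief.items()}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

def expected_info_gain(belief, likelihood, answers):
    """Expected entropy reduction from asking a query with these answers."""
    h0 = entropy(belief)
    gain = 0.0
    for a in answers:
        # Marginal probability of observing answer a under the current belief.
        pa = sum(p * likelihood(h, a) for h, p in belief.items())
        if pa > 0:
            gain += pa * (h0 - entropy(update(belief, likelihood, a)))
    return gain

# Toy example: two candidate preferences explain the same demonstration,
# so a single yes/no query ("should dairy go on the top shelf?") is asked
# to disambiguate them. Hypothesis names are made up for illustration.
belief = {"dairy_top_shelf": 0.5, "dairy_door": 0.5}

def likelihood(h, answer):
    # Assume the user answers consistently with their preference 90% of the time.
    return 0.9 if (answer == "yes") == (h == "dairy_top_shelf") else 0.1

gain = expected_info_gain(belief, likelihood, ["yes", "no"])
posterior = update(belief, likelihood, "yes")
```

Under this toy model, the query has positive expected information gain, and a "yes" answer shifts the belief toward the top-shelf preference; a constraint-aware planner would then only commit to placements consistent with both the inferred preference and the available space.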
Video: https://youtu.be/EwiCVS5JfCY
Website: https://portal-cornell.github.io/apricot/
Code: https://github.com/portal-cornell/apricot
Student Paper: yes
Submission Number: 621