Generative to Discriminative Knowledge Distillation for Object Affordance

Published: 01 Jan 2025 · Last Modified: 04 Nov 2025 · ICDL 2025 · CC BY-SA 4.0
Abstract: In this paper, we present a novel approach to relational object affordance learning that leverages the knowledge distillation paradigm, with large language models (LLMs) serving as generative teacher models. Unlike traditional affordance learning approaches, which depend heavily on manual annotations, our approach uses LLMs to automatically generate binary affordance labels and functional rationale explanations grounded in object semantics and physical plausibility. This reduces the need for labor-intensive labeling while harnessing the rich semantic knowledge embedded in LLMs. To transfer this knowledge, we train a discriminative student model on the generated outputs, ensuring both predictive accuracy and semantic alignment with the teacher model. The student benefits from dual supervision: affordance labels guide classification, while rationales enhance functional understanding. Experimental results demonstrate that our generative-to-discriminative distillation method improves computational efficiency while maintaining a generalizable understanding of affordances across diverse object-object-action scenarios.
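The dual-supervision objective described in the abstract can be illustrated with a minimal sketch. The module names, feature dimensions, loss weighting `alpha`, and the use of a frozen sentence encoder to embed the teacher's rationale text are all assumptions made for illustration; the abstract does not specify these implementation details.

```python
# Minimal sketch of a generative-to-discriminative distillation objective:
# a discriminative student is trained on LLM-generated binary affordance
# labels (classification) plus alignment with embeddings of the LLM's
# rationale explanations (semantic alignment). All names, dimensions, and
# the loss weighting are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffordanceStudent(nn.Module):
    """Hypothetical student: predicts a binary affordance score for an
    encoded (object, object, action) triple, plus an embedding used to
    align with the teacher's rationale representation."""
    def __init__(self, in_dim=512, hid_dim=256, rationale_dim=384):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, hid_dim), nn.ReLU(),
        )
        self.cls_head = nn.Linear(hid_dim, 1)                # binary affordance logit
        self.align_head = nn.Linear(hid_dim, rationale_dim)  # projection into rationale space

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h).squeeze(-1), self.align_head(h)

def distillation_loss(logits, student_emb, teacher_labels, rationale_emb, alpha=0.5):
    """Dual supervision: BCE against the LLM-generated binary labels plus a
    cosine-alignment term against rationale embeddings (assumed precomputed
    by a frozen text encoder)."""
    cls_loss = F.binary_cross_entropy_with_logits(logits, teacher_labels)
    align_loss = 1.0 - F.cosine_similarity(student_emb, rationale_emb, dim=-1).mean()
    return cls_loss + alpha * align_loss

# One illustrative training step on random stand-in data.
student = AffordanceStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

triple_feats = torch.randn(32, 512)                   # encoded (object, object, action) triples
teacher_labels = torch.randint(0, 2, (32,)).float()   # LLM-generated binary affordance labels
rationale_emb = torch.randn(32, 384)                  # frozen-encoder rationale embeddings

logits, emb = student(triple_feats)
loss = distillation_loss(logits, emb, teacher_labels, rationale_emb)
opt.zero_grad()
loss.backward()
opt.step()
```

At inference time only the lightweight student is needed, which is where the computational-efficiency gain over querying the LLM teacher directly would come from.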