CRAFT: A Neuro-Symbolic Framework for Visual Functional Affordance Grounding

Published: 29 Aug 2025, Last Modified: 29 Aug 2025 · NeSy 2025 - Phase 2 Poster · CC BY 4.0
Keywords: Functional affordance grounding, Neuro-symbolic reasoning, Embodied AI
TL;DR: We propose a neuro-symbolic framework for functional affordance grounding that combines symbolic priors with neural visual grounding to identify action-relevant objects in cluttered scenes.
Abstract: We introduce CRAFT, a neuro-symbolic framework for interpretable affordance grounding, which identifies the objects in a scene that enable a given action (e.g., “cut”). CRAFT integrates structured commonsense priors from ConceptNet and language models with visual evidence from CLIP, using an energy-based reasoning loop to refine predictions iteratively. This process yields transparent, goal-driven decisions grounded in both symbolic and perceptual structure. Experiments in multi-object, label-free settings demonstrate that CRAFT improves accuracy while remaining interpretable, providing a step toward robust and trustworthy scene understanding.
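The abstract does not specify CRAFT's energy function or update rule, but the general pattern it describes (fusing symbolic priors with visual evidence and iteratively refining a belief over candidate objects) can be illustrated with a minimal sketch. All names, scores, and the specific update rule below are hypothetical stand-ins: the prior scores stand in for ConceptNet/LM commonsense relevance, and the visual scores stand in for CLIP image-text similarity.

```python
import math

# Hypothetical commonsense prior for the action "cut"
# (in CRAFT this role is played by ConceptNet / language-model priors).
prior = {"knife": 0.9, "scissors": 0.7, "cup": 0.1, "book": 0.05}

# Hypothetical visual evidence per candidate object
# (in CRAFT this role is played by CLIP image-text similarity).
visual = {"knife": 0.6, "scissors": 0.4, "cup": 0.3, "book": 0.2}

def energy(obj, w_prior=1.0, w_vis=1.0):
    """Lower energy = better candidate; sums both evidence sources."""
    return -(w_prior * prior[obj] + w_vis * visual[obj])

def refine(objs, steps=10, lr=0.5):
    """Iteratively sharpen a uniform belief toward low-energy objects."""
    belief = {o: 1.0 / len(objs) for o in objs}
    for _ in range(steps):
        # Multiplicative reweighting by exp(-lr * energy), then renormalize.
        scores = {o: belief[o] * math.exp(-lr * energy(o)) for o in objs}
        z = sum(scores.values())
        belief = {o: s / z for o, s in scores.items()}
    return belief

belief = refine(list(prior))
best = max(belief, key=belief.get)  # the grounded object for "cut"
```

Because "knife" scores highest under both evidence sources, its belief mass grows each iteration while the others shrink, giving a transparent, per-step trace of why it was selected.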
Track: Neurosymbolic Methods for Trustworthy and Interpretable AI
Paper Type: Short Paper
Resubmission: No
Publication Agreement: pdf
Submission Number: 28