Abstract: Existing surface defect semantic segmentation methods are limited by costly annotated data and cannot cope with new or rare defect types. Zero-shot learning offers a new possibility for addressing this issue by reducing reliance on extensive annotated data. However, methods that rely solely on image information waste the valuable experience that humans have accumulated in the field of defect detection. In this work, we propose a human-guided segmentation network (HGNet) based on CLIP, introducing human guidance to address data scarcity and effectively leverage expert knowledge, leading to more accurate and reliable surface defect segmentation. HGNet, guided by human-provided text, consists of two novel modules: 1) attention-based multilevel feature fusion (AMFF), which integrates multilevel features using attention mechanisms to enhance fine-grained information capture, and 2) multimodal feature adaptive balancing (MFAB), which aligns and balances multimodal features through dynamic adjustment and optimization. Moreover, we extend HGNet to HGNet+ by incorporating interactive learning to correct segmentation errors with human-provided points. Our proposed method generalizes to unseen classes without additional training samples for retraining, meeting the practical needs of industrial defect detection. Extensive experiments on Defect-$4^{i}$ (and MVTec-ZSS) demonstrate that our method outperforms state-of-the-art zero-shot methods by 5.7%/7.81% (6.57%/8.06%) and is even comparable to existing few-shot methods.