DarkSeg: Infrared-Driven Semantic Segmentation for Garment Grasping Detection in Low-Light Conditions
Poster: pdf
Attend In Person: Yes
Keywords: garment grasping, semantic segmentation
Abstract: Garment grasping in low-light environments remains a critical yet underexplored challenge for domestic service robots. Insufficient illumination leads to sparse visual features, making different garment categories appear ambiguously similar and impairing reliable recognition. While conventional approaches employ infrared–visible multimodal fusion to mitigate this issue, their heavy computational overhead limits real-time deployment on resource-constrained robotic platforms. To overcome these limitations, we propose DarkSeg, a student–teacher model designed for low-light garment detection. Unlike multimodal fusion methods, DarkSeg leverages an indirect feature alignment mechanism, in which the student model learns illumination-invariant structural representations from infrared features provided by the teacher model. This effectively compensates for structural deficiencies in low-light imagery while maintaining computational efficiency. To further validate DarkSeg in practical robotic applications, we introduce a depth-perceptive grasping strategy and construct DarkClothes, a low-light multimodal garment dataset. Experiments on a Baxter robot demonstrate that DarkSeg improves the garment grasping success rate by 22% while using 99.08M fewer parameters than traditional methods, highlighting its effectiveness and feasibility for real-world deployment.
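The abstract does not spell out how the indirect feature alignment is implemented. As a rough illustration only, the sketch below shows one plausible form of such an objective: a frozen infrared teacher encoder supervising a low-light RGB student encoder through a feature-space distillation loss, so that the infrared branch can be dropped at inference time. All names (FeatureAlignLoss, training_step, align_weight, etc.) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of indirect feature alignment between a frozen
# infrared "teacher" encoder and a low-light RGB "student" encoder.
# Module and variable names are illustrative assumptions, not the
# authors' actual DarkSeg implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAlignLoss(nn.Module):
    """Align student features (low-light RGB) with teacher features (infrared)."""
    def forward(self, student_feat, teacher_feat):
        # L2-normalize along the channel axis so the loss measures
        # structural similarity rather than absolute activation scale.
        s = F.normalize(student_feat, dim=1)
        t = F.normalize(teacher_feat, dim=1)
        return F.mse_loss(s, t)

def training_step(student, teacher, seg_head, rgb_lowlight, infrared, labels,
                  align_weight=0.5):
    # The teacher runs on the infrared image and is kept frozen.
    with torch.no_grad():
        teacher_feat = teacher(infrared)
    # The student sees only the low-light RGB image; at inference time
    # the infrared branch is discarded, keeping the deployed model small.
    student_feat = student(rgb_lowlight)
    logits = seg_head(student_feat)
    seg_loss = F.cross_entropy(logits, labels)
    align_loss = FeatureAlignLoss()(student_feat, teacher_feat)
    # Supervised segmentation loss plus the (assumed) alignment term.
    return seg_loss + align_weight * align_loss
```

Because only the student and segmentation head are needed at deployment, this kind of design is consistent with the parameter savings the abstract reports, though the paper's actual loss and architecture may differ.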
Submission Number: 9