Abstract: Although existing grasp detection methods have achieved encouraging performance under well-lit conditions, repeated experiments show that detection performance deteriorates drastically under low-light conditions. Although additional sensors, such as depth cameras, can provide supplementary information, the sparse and weak visual features still hinder improvements in detection accuracy. To address these issues, we propose a visual enhancement guided grasp detection model (VERGNet) to improve the robustness of robotic grasping in low-light conditions. First, a framework for simultaneous grasp detection and low-light feature enhancement is designed, which integrates residual blocks with coordinate attention to re-optimize grasping features. Then, an unsupervised low-light feature enhancement strategy is adopted to reduce the dependence on paired data and to improve the algorithm's robustness to low-light conditions. Extensive experiments are conducted on two newly constructed low-light grasp datasets, on which the proposed method achieves 98.9% and 91.2% detection accuracy, respectively, outperforming comparative methods. The effectiveness of our method has also been validated in real-world low-light imaging scenarios.
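The abstract does not specify the implementation, but a minimal PyTorch sketch of the building block it names, a residual block combined with coordinate attention (following the standard formulation of Hou et al., CVPR 2021), might look like the following. All module names, the reduction ratio, and the placement of attention on the residual branch are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Coordinate attention: factorizes spatial attention into two
    1-D encodings pooled along the height and width axes."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        xh = self.pool_h(x)                      # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
        # Jointly encode both directions, then split back apart.
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                      # (B, C, H, 1)
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * ah * aw  # broadcast position-aware attention


class ResidualCABlock(nn.Module):
    """Residual block with coordinate attention applied to the
    residual branch before the skip connection is added."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.ca = CoordinateAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.ca(self.body(x)))
```

In this sketch, attention gates the residual branch so that weak low-light features can be reweighted by position before being merged with the identity path; whether VERGNet places the attention there or elsewhere in the block is not stated in the abstract.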