Abstract: Weed detection plays a critical role in improving agricultural production. Distinguishing crops from weeds is vital for precisely spraying weeds without polluting the wider ecological environment. Many computer vision methods have been proposed to achieve reliable weed detection at relatively high speed. However, the lack of publicly available datasets hinders efforts to improve work in this field. In this paper, manually labeled datasets for sugar beets and sunflowers were developed. Several state-of-the-art single-shot object detection architectures and methods were trained on the two datasets, and a comparison of the methods for detecting weeds is presented. The primary result is that the You Only Look Once (YOLO) family of architectures detected weeds in these datasets better than alternative architectures such as RetinaNet, EfficientDet, and Detection Transformer (DETR). One key challenge was the detection of smaller weeds.