Analytical Study on Region of Interest and Dataset Size of Vision-based End-to-End Lateral Control for Off-road Autonomy
Keywords: vision-based navigation, deep neural network, behavior cloning, mobile robot
TL;DR: The study examines the autonomy of agricultural robots using vision-based navigation. It assesses how varying input image ROIs impact precision, completion time, and autonomy.
Abstract: Off-road autonomy is a challenging topic for mobile robots, since most navigation algorithms were developed for indoor, structured, or even-surface outdoor environments. To address this problem, vision-based navigation approaches in the Behavior Cloning (BC) style, built on Deep Neural Networks (DNNs), have been proposed. Yet, it has remained unclear which region of the input image should be focused on and how large the training dataset should be. In this study, we analyzed how variations in the Region of Interest (ROI) of the input image affect the controller's performance in terms of precision, completion time, and autonomy in off-road navigation.
Our findings indicate that the selection of the ROI significantly impacts the DNN controller's ability to navigate off-road autonomously. Specifically, we observe that full-sized input images tend to degrade performance on precision driving tasks, as they capture details unnecessary for maneuvering. Conversely, a cropped ROI focusing mainly on the upper region of the bottom half of the image can optimize completion-time-related objectives. Furthermore, a larger dataset improved autonomy when the ROI was selected around the center of the image. These insights offer valuable considerations for designing a DNN-based BC controller tailored to specific navigation requirements, balancing performance and real-world applicability.
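As a concrete illustration (not code from the paper), a minimal sketch of the ROI cropping described in the abstract might look like the following. The `crop_roi` name and the exact row fractions (keeping rows from H/2 to 3H/4, i.e., the upper region of the bottom half of the frame) are assumptions for illustration only; the paper's actual crop boundaries may differ.

```python
import numpy as np

def crop_roi(image: np.ndarray) -> np.ndarray:
    """Keep the upper region of the bottom half of an (H, W, C) frame.

    Rows [H/2, 3H/4) form the band just below the vertical midpoint;
    the fractions here are illustrative assumptions, not the paper's
    reported crop boundaries.
    """
    h = image.shape[0]
    return image[h // 2 : (3 * h) // 4, :, :]

# Usage: a 480x640 RGB frame yields a 120x640 ROI strip that would
# be fed to the DNN-based BC lateral controller instead of the
# full-sized image.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
roi = crop_roi(frame)
assert roi.shape == (120, 640, 3)
```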
Submission Number: 4