Abstract: This paper addresses the problem of vision-based object geolocation using Unmanned Aerial Vehicles (UAVs) in Search and Rescue settings. It focuses on automatically and accurately geolocating objects of different classes, in particular human bodies, to provide a map of the detected objects as salient locations. Such maps can be used by responders to plan rescue operations, or by other robotic platforms for which geolocation is necessary, such as for the delivery of medical supplies. The proposed solution strategy combines recent developments in Convolutional Neural Networks for vision-based object detection with a method for fusing detections. Occupancy probabilities are computed for locations in the environment that contain, or lack, objects of specific classes. This is achieved with a novel sensor model that fuses vision-based detections using both positive and negative observations. The method is validated in simulation as well as in real field experiments.
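The abstract does not give the paper's sensor model, but the idea of fusing positive and negative detections into occupancy probabilities can be illustrated with a standard Bayesian log-odds update. The sketch below is a minimal, generic example, not the authors' method; the detection rates `p_tp` (true-positive) and `p_fp` (false-positive) are assumed parameters chosen for illustration.

```python
import math

def log_odds(p: float) -> float:
    """Convert a probability to log-odds form."""
    return math.log(p / (1.0 - p))

def prob(l: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update_cell(l: float, detected: bool,
                p_tp: float = 0.9, p_fp: float = 0.05) -> float:
    """One Bayesian update of a map cell's occupancy log-odds.

    detected=True  : positive observation (detector fired on this cell)
    detected=False : negative observation (cell was seen, nothing detected)
    p_tp / p_fp are assumed detector characteristics, not values from the paper.
    """
    if detected:
        # Likelihood ratio P(detection | occupied) / P(detection | free)
        l += math.log(p_tp / p_fp)
    else:
        # Likelihood ratio P(no detection | occupied) / P(no detection | free)
        l += math.log((1.0 - p_tp) / (1.0 - p_fp))
    return l

# Start from an uninformative prior (p = 0.5, i.e. log-odds 0) and
# fuse a sequence of observations of the same cell.
l = 0.0
for obs in [True, True, False]:
    l = update_cell(l, obs)
print(f"occupancy probability: {prob(l):.3f}")
```

Negative observations matter here: repeatedly observing a cell without a detection drives its probability below the prior, which is what lets such a map also mark areas as clear.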
External IDs: dblp:conf/miwai/RudolWD24