ENHANCING DRONE AS A FIRST RESPONDER OPERATIONS THROUGH INTEGRATED VEHICLE DETECTION AND AUTONOMOUS CAMERA CONTROL

Published: 01 Oct 2025, Last Modified: 13 Nov 2025 · RISEx Poster · CC BY 4.0
Keywords: drone, object tracking, first responders, emergency response, autonomous systems
TL;DR: A semi-autonomous drone system that automatically detects vehicles and reads license plates, helping first responders identify specific vehicles during emergencies without putting people at risk.
Abstract:

**INTRODUCTION** First responders require rapid and reliable situational awareness in time-critical emergency operations, particularly when ground-based observation is limited or dangerous [1]. Aerial data collection offers a valuable means of addressing these challenges [2], but current drone-as-a-first-responder (DFR) systems typically require manual operation, necessitating dedicated staffing and training. To overcome these limitations, we propose a semi-autonomous, drone-based detection system that integrates vehicle detection, multi-frame tracking, and license plate recognition with autonomous camera control. The system improves aerial situational awareness by enabling rapid identification of vehicles of interest during active incidents, missing person cases, and evacuation scenarios.

**MATERIALS AND METHODS** Our proposed system integrates three computer vision components into a unified pipeline. First, vehicle detection is performed by a YOLO-based model trained on the VisDrone dataset to address the challenges of small-object detection and variable viewpoints in drone imagery. Second, multi-object tracking maintains persistent identifiers across video frames, enabling consistent vehicle identification throughout the video input. Third, license plates detected by a second YOLO-based model, trained on the Roboflow License Plate Recognition dataset, are read with optical character recognition (EasyOCR). In addition to these components, the system incorporates vision-guided gimbal control, in which bounding box outputs from the vehicle detections are used to reorient the drone's camera. By autonomously centering vehicles of interest in the field of view under the dynamic conditions of flight, the system reduces the need for manual camera adjustments. The pipeline outputs structured data for each frame, including bounding box coordinates, confidence scores, vehicle classifications, extracted license plate text, and gimbal adjustment values. These outputs can be overlaid on the original video stream for near real-time monitoring or exported for post-mission analysis (Fig 1).

**RESULTS AND DISCUSSION** Experiments were conducted on drone videos under varied lighting, altitudes, and viewing angles to evaluate detection, tracking, and license plate recognition performance. The system maintained persistent vehicle detections and identifiers across sequences. Integrating vision-guided gimbal control improved detection stability and the readability of vehicle plates. By re-centering vehicles within the camera's field of view, the system reduced target loss during oblique views; this also alleviates the need for constant manual camera adjustments, lowering operators' cognitive burden during emergency operations.

**CONCLUSIONS** This work addresses a practical challenge in emergency response operations: providing real-time information to first responders while reducing the need for manual camera adjustments during critical missions. The approach contributes to ongoing efforts to develop more autonomous emergency response tools, such as drones that allow operators to focus on decision-making rather than equipment control. The system's modular design enables adoption across different drone platforms and emergency response contexts, supporting broader use in search and rescue, disaster response, and incident monitoring operations.
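To make the pipeline concrete, the sketches below illustrate each stage under stated assumptions; they are reconstructions, not the authors' released code. The first covers detection and tracking, assuming the ultralytics YOLO package and a hypothetical VisDrone-finetuned weight file (`visdrone_vehicles.pt`); the abstract does not specify the exact model variant or tracker configuration.

```python
# Minimal sketch of the detection + tracking stage. Assumes the
# ultralytics YOLO package; the weight file name is a hypothetical
# placeholder for a VisDrone-finetuned model.
from ultralytics import YOLO

model = YOLO("visdrone_vehicles.pt")  # hypothetical fine-tuned weights

# track() couples detection with a built-in multi-object tracker,
# assigning a persistent integer ID to each vehicle across frames.
for result in model.track(source="mission_video.mp4", persist=True, stream=True):
    if result.boxes.id is None:
        continue  # no confirmed tracks in this frame
    for box, track_id, conf in zip(result.boxes.xyxy, result.boxes.id, result.boxes.conf):
        x1, y1, x2, y2 = box.tolist()
        print(f"vehicle {int(track_id)}: bbox=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f}) conf={conf:.2f}")
```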
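The plate-reading stage can be sketched similarly: a second detector proposes plate regions, and EasyOCR reads the cropped text. The plate weight file and the confidence threshold are illustrative assumptions.

```python
# Sketch of the plate-reading stage. EasyOCR's readtext() returns
# (bbox, text, confidence) tuples for each text region it finds.
import easyocr
from ultralytics import YOLO

plate_model = YOLO("plate_detector.pt")  # hypothetical plate weights
reader = easyocr.Reader(["en"])          # load the OCR model once

def read_plates(frame):
    """Detect license plates in a frame and return (text, confidence) pairs."""
    plates = []
    for box in plate_model(frame)[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        crop = frame[y1:y2, x1:x2]       # crop the plate region
        for _, text, conf in reader.readtext(crop):
            if conf > 0.4:               # illustrative threshold
                plates.append((text, float(conf)))
    return plates
```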
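The vision-guided gimbal step reduces to a small geometric rule: map the pixel offset of the target's bounding-box center to yaw/pitch corrections. The field-of-view values and proportional gain below are illustrative; the actual command interface depends on the drone platform's SDK.

```python
# Sketch of the vision-guided gimbal logic: the offset of a target's
# bounding-box center from the image center is mapped to angular
# corrections that re-center the target in the field of view.
def gimbal_adjustment(bbox, frame_w, frame_h,
                      hfov_deg=84.0, vfov_deg=53.0, gain=0.5):
    """Return (yaw_deg, pitch_deg) corrections that re-center the target."""
    x1, y1, x2, y2 = bbox
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0

    # Normalized offset of the box center from the image center, in [-0.5, 0.5].
    dx = cx / frame_w - 0.5
    dy = cy / frame_h - 0.5

    # Scale the offset to an angle and damp it with a proportional gain
    # so the camera converges on the target without overshooting.
    yaw_deg = gain * dx * hfov_deg
    pitch_deg = -gain * dy * vfov_deg  # image y grows downward
    return yaw_deg, pitch_deg
```

In practice the returned angles would be sent to the platform's gimbal API once per frame; the gain damps oscillation as the target moves.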
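Finally, the structured per-frame output might be serialized as one JSON record per frame; the field names and JSON-lines layout here are assumptions chosen for illustration, not the authors' documented schema.

```python
# Sketch of the per-frame structured output described in the abstract.
import json

def frame_record(frame_idx, tracks, plates, gimbal):
    """Bundle one frame's results for live overlay or post-mission export."""
    return {
        "frame": frame_idx,
        "detections": [
            {"track_id": tid, "bbox": bbox, "confidence": conf, "class": cls}
            for tid, bbox, conf, cls in tracks
        ],
        "plates": [{"text": t, "confidence": c} for t, c in plates],
        "gimbal": {"yaw_deg": gimbal[0], "pitch_deg": gimbal[1]},
    }

# One JSON line per frame keeps the log appendable during flight.
with open("mission_log.jsonl", "a") as log:
    log.write(json.dumps(frame_record(0, [], [], (0.0, 0.0))) + "\n")
```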
**REFERENCES**
[1] Gharrad H et al. Transp Res Procedia 84: 209–218, 2025.
[2] Chen C et al. Drones 7(3): 190, 2023.
Submission Number: 33