Abstract: Omnidirectional vision systems provide a 360-degree panoramic view, enabling full environmental awareness in applications such as Advanced Driver Assistance Systems (ADAS) and Virtual Reality (VR). Existing omnidirectional stitching methods rely on a single specialized 360-degree camera. However, due to hardware limitations such as high mounting heights and blind spots, adapting these methods to vehicles of varying sizes and geometries is challenging. These challenges include limited generalizability due to the reliance on predefined stitching regions for fixed camera arrays, performance degradation from distance parallax leading to large depth differences, and the absence of suitable datasets with ground truth for multi-camera omnidirectional systems. To overcome these challenges, we propose a novel omnidirectional stitching framework, together with a publicly available dataset, tailored to varying-distance scenarios with multiple cameras. The framework, referred to as OmniStitch, consists of a Stitching Region Maximization (SRM) module for automatic adaptation to different vehicles with multiple cameras and a Depth-Aware Stitching (DAS) module that handles depth differences caused by distance parallax between cameras. In addition, we create and release an omnidirectional stitching dataset, called GV360, which provides ground-truth images that maintain the perspective of the 360-degree FOV and is specifically designed for vehicle-agnostic systems. Extensive evaluations on this dataset demonstrate that our framework outperforms state-of-the-art stitching models, especially in handling varying distance parallax. The proposed dataset and code are publicly available at URL.
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Experience] Interactions and Quality of Experience
Relevance To Conference: This work proposes a novel omnidirectional image stitching framework that handles dynamic distance parallax between images captured by multiple cameras. The proposed framework improves the generalizability of stitching by addressing the difficulty of determining stitching regions for multiple cameras. Additionally, it accounts for the large depth differences caused by distance parallax between different cameras. Due to the lack of an omnidirectional stitching dataset for vehicle-agnostic systems, this work also introduces an omnidirectional stitching dataset called GV360. This dataset contains ground-truth images that preserve the perspective of 360-degree coverage and provides simulated dynamic distance-parallax scenarios for different vehicles.
Supplementary Material: zip
Submission Number: 3013