Abstract: In this paper, we present a method that stitches multiple videos captured with a fixed camera rig. We propose an Extended-MeshFlow motion model for video stitching. First, uniform features are detected and matched in the overlapping region, from which the Extended-MeshFlow model is estimated. The model then warps the adjacent views to the common central view to eliminate spatial misalignment. The motions located at the feature positions are interpolated to the mesh vertices by Multilevel B-Spline Approximation (MBA). Collecting the motions at the vertices forms the vertex profiles, which are smoothed for temporal consistency. Since the smoothing requires only previous frames, the proposed method can stitch videos in an online mode. Experimental results on various videos demonstrate that the proposed method produces comparable stitching results in terms of spatial alignment and temporal coherence.
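To make the pipeline concrete, the sketch below illustrates two of the steps the abstract describes: spreading sparse feature motions onto a regular mesh, and causally smoothing the resulting vertex profiles so that only previous frames are needed. This is an illustrative approximation only: inverse-distance weighting stands in for the paper's Multilevel B-Spline Approximation, and the simple moving average stands in for the paper's smoothing scheme; the function names and grid parameters are hypothetical.

```python
import numpy as np

def interpolate_to_vertices(feat_pts, feat_motions, grid_w, grid_h,
                            frame_w, frame_h, eps=1e-6):
    """Spread sparse per-feature motions onto a regular mesh grid.

    Uses inverse-distance weighting as a simple stand-in for the
    Multilevel B-Spline Approximation (MBA) used in the paper.
    feat_pts: (N, 2) feature coordinates; feat_motions: (N, 2) motion vectors.
    Returns a (grid_h, grid_w, 2) field of vertex motions.
    """
    xs = np.linspace(0.0, frame_w, grid_w)
    ys = np.linspace(0.0, frame_h, grid_h)
    vx, vy = np.meshgrid(xs, ys)
    verts = np.stack([vx.ravel(), vy.ravel()], axis=1)            # (V, 2)
    # Squared distances from every vertex to every feature point: (V, N)
    d2 = ((verts[:, None, :] - feat_pts[None, :, :]) ** 2).sum(axis=-1)
    w = 1.0 / (d2 + eps)                                          # IDW weights
    w /= w.sum(axis=1, keepdims=True)
    motions = w @ feat_motions                                    # (V, 2)
    return motions.reshape(grid_h, grid_w, 2)

def smooth_online(profile_history, window=5):
    """Causal smoothing of vertex profiles: average only the last `window`
    frames, so stitching can proceed in an online (streaming) fashion."""
    recent = profile_history[-window:]
    return np.mean(recent, axis=0)
```

Because the smoother looks only backwards in time, each incoming frame can be warped and emitted immediately, which is what enables the online mode mentioned in the abstract.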