Repeatable Pattern Mining for Accurate Subtraction of Backgrounds with Waving Objects in Underwater Videos

Published: 01 Jan 2022 (DSAA 2022), Last Modified: 06 Nov 2023
Abstract: Advanced Background Subtraction (BGS) algorithms for dynamic backgrounds succeed mostly in land scenes, such as those in the CDNet benchmarks; few handle underwater scenes, since existing underwater video datasets are either low-resolution or contain only static backgrounds. Consequently, the lack of reliable BGS support makes supervised Moving-Objects Segmentation (MOS) algorithms much harder to adapt to unknown underwater scenes, given the diversity of aquatic environments. For example, algorithms trained on the latest underwater image dataset, SUIM, are ineffective on the underwater videos in our experiments.

Underwater waving objects (e.g., plants) often render existing BGS algorithms inaccurate due to three types of errors: (a) incompletely identified MOs (Moving Objects), (b) missing MOs, and (c) falsely identified MOs. In this paper, we propose a novel Clustering-Based Multi-State Background Representation (CBMSBR) model to learn and represent the repeatable patterns of waving movements in k background states (i.e., color ranges) per pixel, and thus accurately subtract waving background objects and reduce these errors. In addition, we develop a CBMSBR+ model to remove more challenging background objects that wave with unusually large magnitudes. Both models stem from a basic observation: video pixels in waving zones repeatedly switch among multiple background states; e.g., a pixel may switch among a water state, a plant 1 state, and a plant 2 state. To test the proposed models, we design experiments on three types of challenging scenarios, each of which typically covers at least two error types: the scattered-MOs scenario covering (b) and (c), the crowded-MOs scenario covering (a)-(c), and the slow-MOs scenario covering (a) and (c). Experiments on these scenarios demonstrate the accuracy, effectiveness, and efficiency of our models and their application to improving MOS.
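To make the multi-state observation concrete, below is a minimal NumPy sketch of a per-pixel k-state background model: each pixel keeps k color centers as candidate background states, a new pixel color is classified as background if it falls within a threshold of any center, and the nearest matching center drifts toward the observed color. This is an illustrative toy, not the paper's CBMSBR algorithm; the function name, the match_thresh and lr parameters, and the online k-means update rule are all assumptions made for the sketch.

```python
import numpy as np

def update_and_classify(frame, centers, match_thresh=20.0, lr=0.05):
    """One step of a toy per-pixel multi-state background model.

    frame   : (H, W, 3) float32 array, the current color frame.
    centers : (H, W, k, 3) float32 array, k background color states per pixel.
    Returns (foreground_mask, updated_centers).
    """
    # Color distance from each pixel to each of its k state centers.
    dist = np.linalg.norm(centers - frame[:, :, None, :], axis=-1)      # (H, W, k)
    best = dist.argmin(axis=-1)                                         # (H, W)
    best_dist = np.take_along_axis(dist, best[..., None], axis=-1)[..., 0]

    # A pixel matching any background state (water, plant 1, plant 2, ...)
    # is background; a pixel matching no state is flagged as a moving object.
    matched = best_dist < match_thresh
    foreground = ~matched

    # Drift the matched state's center toward the observed color
    # (a simple online k-means update, applied to matched pixels only).
    k = centers.shape[2]
    onehot = np.arange(k)[None, None, :, None] == best[..., None, None]  # (H, W, k, 1)
    step = lr * (frame[:, :, None, :] - centers)                         # (H, W, k, 3)
    centers = centers + np.where(onehot & matched[..., None, None], step, 0.0)
    return foreground, centers


# Usage: seed the k states, then stream frames through the model.
# (Identical seeds keep the sketch short; in practice the states would be
# learned by clustering pixel colors over an initial window of frames.)
# frame0 = first_frame.astype(np.float32)                # (H, W, 3)
# centers = np.repeat(frame0[:, :, None, :], 3, axis=2)  # k = 3 states
# mask, centers = update_and_classify(next_frame.astype(np.float32), centers)
```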