Semi-Autonomous Fast Object Segmentation and Tracking Tool for Industrial Applications

Published: 01 Jan 2024, Last Modified: 13 Nov 2024 · UR 2024 · CC BY-SA 4.0
Abstract: In the domain of deep learning for computer vision, minimizing the data annotation workload is crucial. Because individual objects are unique, comprehensive data annotation is essential for training deep neural networks. To streamline this process, a partially automated video annotation approach is proposed: each object is segmented and classified with a single click, and subsequent frames in which the object remains visible are annotated automatically through interpolation and tracking. In this paper, we present a Fast Object Segmentation and Tracking Tool (FOST), which significantly reduces the labor-intensive nature of labeling image data from videos. Unlike other annotation tools, FOST can automatically segment pre-selected objects in subsequent frames using optical flow calculations. FOST is evaluated on three industrial applications. In our tests, segmentation times range from approximately 0.14 to 0.29 seconds per frame, depending on the number of segmented objects in each frame.
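
The abstract describes propagating a single-click segmentation to subsequent frames via optical flow. The sketch below illustrates one possible way to implement such mask propagation using OpenCV's Farneback dense optical flow; the function name, parameter values, and backward-warping strategy are assumptions for illustration and do not represent FOST's actual implementation.

```python
# Minimal sketch (not the authors' implementation): propagating a binary
# segmentation mask from one video frame to the next with dense optical flow,
# as a stand-in for the optical-flow-based tracking described in the abstract.
import cv2
import numpy as np


def propagate_mask(prev_gray: np.ndarray, next_gray: np.ndarray,
                   mask: np.ndarray) -> np.ndarray:
    """Warp `mask` (uint8, same size as the frames) from the previous frame
    into the next frame. All flow parameters are illustrative defaults."""
    # Compute flow from the next frame back to the previous frame, so the
    # mask can be sampled with backward warping.
    flow = cv2.calcOpticalFlowFarneback(
        next_gray, prev_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(mask.astype(np.float32), map_x, map_y,
                       interpolation=cv2.INTER_LINEAR)
    # Re-binarize the interpolated mask.
    return (warped > 0.5).astype(np.uint8)
```

In such a scheme, the annotator's single-click segmentation in the first frame would be carried forward frame by frame, with manual correction only where the propagated mask drifts.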