A Trimodal Dataset: RGB, Thermal, and Depth for Human Segmentation and Temporal Action Detection

Published: 01 Jan 2023 · Last Modified: 13 Nov 2024 · DAGM 2023 · CC BY-SA 4.0
Abstract: Computer vision research and popular datasets are predominantly based on the RGB modality. However, traditional RGB datasets have limitations under varying lighting conditions and raise privacy concerns. Integrating or substituting RGB with thermal and depth data offers a more robust and privacy-preserving alternative. We present TRISTAR (https://zenodo.org/record/7996570, https://github.com/Stippler/tristar), a public TRImodal Segmentation and acTion ARchive comprising registered sequences of RGB, depth, and thermal data. The dataset encompasses 10 unique environments, 18 camera angles, 101 shots, and 15,618 frames, which include human masks for semantic segmentation and dense labels for temporal action detection and scene understanding. We discuss the system setup, including sensor configuration and calibration, as well as the process of generating ground-truth annotations. In addition, we conduct a quality analysis of the proposed dataset and provide benchmark models as reference points for human segmentation and action detection. Using only the thermal and depth modalities, these models yield improvements in both human segmentation and action detection.
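Since the sequences are registered, the three modalities can be treated as pixel-aligned channels of a single input. The sketch below illustrates this idea; the shapes, value ranges, and normalization are illustrative assumptions, not the dataset's actual on-disk format.

```python
import numpy as np

def stack_trimodal(rgb, thermal, depth):
    """Stack pixel-aligned RGB, thermal, and depth frames into one array.

    Assumes all three modalities share the same resolution, as the
    dataset's registered sequences suggest. Normalization here is a
    placeholder; real preprocessing depends on the sensor ranges.
    """
    rgb = rgb.astype(np.float32) / 255.0                   # (H, W, 3) in [0, 1]
    thermal = thermal.astype(np.float32)[..., None]        # (H, W, 1)
    depth = depth.astype(np.float32)[..., None]            # (H, W, 1)
    return np.concatenate([rgb, thermal, depth], axis=-1)  # (H, W, 5)

# Synthetic stand-in for one registered frame triple.
h, w = 4, 6
rgb = np.zeros((h, w, 3), dtype=np.uint8)
thermal = np.ones((h, w), dtype=np.float32)
depth = np.full((h, w), 2.0, dtype=np.float32)

x = stack_trimodal(rgb, thermal, depth)
print(x.shape)  # (4, 6, 5)
```

A segmentation or action-detection model can then consume the stacked array directly, or drop the first three channels to train on thermal and depth alone, mirroring the privacy-preserving setting evaluated in the paper.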