Abstract: State-of-the-art architectures for untrimmed-video Temporal Action Localization (TAL) have considered only the RGB and Flow modalities, leaving the information-rich audio modality entirely unexploited. Audio fusion has been explored for the related, but arguably easier, problem of trimmed (clip-level) action recognition; TAL, however, poses a unique set of challenges. In this paper, we propose simple but effective fusion-based approaches for TAL. To the best of our knowledge, our work is the first to jointly consider the audio and video modalities for supervised TAL. We experimentally show that our schemes consistently improve performance for state-of-the-art video-only TAL approaches. Specifically, they help achieve new state-of-the-art performance on the large-scale benchmark datasets ActivityNet-1.3 (54.34 mAP@0.5) and THUMOS14 (57.18 mAP@0.5). Our experiments include ablations involving multiple fusion schemes, modality combinations and TAL architectures. Our code, models and associated data are available at https://github.com/skelemoa/tal-hmo
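The abstract does not specify the fusion schemes themselves; as a purely illustrative sketch, one common baseline for combining modalities is score-level (late) fusion, where per-segment confidence scores from separate RGB, Flow and audio streams are averaged. The function and modality names below are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of score-level (late) fusion across modalities.
# This is NOT the paper's method, only a generic weighted-average baseline.

def late_fuse(scores_by_modality, weights=None):
    """Fuse per-segment action confidence scores from several modalities.

    scores_by_modality: dict mapping modality name -> list of floats,
                        one confidence score per candidate temporal segment.
    weights: optional dict of per-modality weights; defaults to uniform.
    """
    modalities = list(scores_by_modality)
    if weights is None:
        weights = {m: 1.0 / len(modalities) for m in modalities}
    num_segments = len(next(iter(scores_by_modality.values())))
    fused = []
    for i in range(num_segments):
        # Weighted sum of each modality's score for segment i.
        fused.append(sum(weights[m] * scores_by_modality[m][i]
                         for m in modalities))
    return fused

# Example: three candidate segments scored by RGB, Flow and audio streams.
fused = late_fuse({
    "rgb":   [0.9, 0.2, 0.4],
    "flow":  [0.8, 0.3, 0.5],
    "audio": [0.7, 0.1, 0.9],
})
```

More sophisticated schemes (e.g., feature-level fusion before the localization head) follow the same interface idea: each modality contributes a representation that is combined before producing the final segment scores.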