Abstract: We focus on multimodal fusion for egocentric action recognition and propose a novel architecture for multimodal temporal binding, i.e., the combination of modalities within a range of temporal offsets. We train the architecture with three modalities – RGB, Flow and Audio – and combine them with mid-level fusion alongside sparse temporal sampling of the fused representations. In contrast to previous works, modalities are fused before temporal aggregation, with modality and fusion weights shared over time. Our proposed architecture is trained end-to-end and outperforms both the individual modalities and late fusion of modalities. We demonstrate the importance of audio in egocentric vision, on a per-class basis, for identifying actions as well as interacting objects. Our method achieves state-of-the-art results on both the seen and unseen test sets of the largest egocentric dataset, EPIC-Kitchens, on all metrics of the public leaderboard.
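The following is a minimal sketch of the mid-level temporal-binding idea described above: per-modality features from each sparsely sampled segment are fused by a layer whose weights are shared across segments, and temporal aggregation happens only after fusion. It assumes PyTorch and precomputed backbone features; the module names, feature dimensions, fusion MLP, and class count are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class MidLevelFusionSketch(nn.Module):
        def __init__(self, feat_dims=(1024, 1024, 512), fused_dim=512, num_classes=125):
            super().__init__()
            # One fusion layer, shared across all temporal segments.
            self.fuse = nn.Sequential(
                nn.Linear(sum(feat_dims), fused_dim),
                nn.ReLU(inplace=True),
            )
            self.classifier = nn.Linear(fused_dim, num_classes)

        def forward(self, rgb, flow, audio):
            # Each input: (batch, num_segments, feat_dim) from a per-modality backbone.
            x = torch.cat([rgb, flow, audio], dim=-1)  # fuse modalities within each segment
            x = self.fuse(x)                           # shared modality/fusion weights over time
            x = x.mean(dim=1)                          # temporal aggregation after fusion
            return self.classifier(x)

    # Example: 3 sparsely sampled segments per clip, batch of 2.
    rgb = torch.randn(2, 3, 1024)
    flow = torch.randn(2, 3, 1024)
    audio = torch.randn(2, 3, 512)
    logits = MidLevelFusionSketch()(rgb, flow, audio)  # shape (2, 125)

Late fusion, by contrast, would average each modality's predictions over time independently and only then combine them; fusing before aggregation lets the shared layer exploit cross-modal correlations within each temporal window.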