A simple yet effective knowledge guided method for entity-aware video captioning on a basketball benchmark
Abstract: Despite the recent emergence of video captioning models, generating text descriptions with specific entity names and fine-grained actions remains far from solved, yet it has important applications such as basketball live text broadcasting. In this paper, we propose a new basketball benchmark for entity-aware video captioning. Specifically, we construct a multimodal basketball game knowledge graph (KG_NBA_2022) that stores
basketball game records as well as detailed information on teams and players. Then, a multimodal basketball
game video captioning (VC_NBA_2022) dataset, which contains 9 types of fine-grained shooting events and knowledge of 286 players (i.e., images and names), is automatically constructed from KG_NBA_2022.
We also develop a simple yet effective knowledge-guided entity-aware video captioning network (KEANet), built on a candidate player list in an encoder–decoder form, for basketball live text broadcasting. Temporal contextual information in the video is encoded by a bidirectional gated recurrent unit (Bi-GRU) module, and an entity-aware module is designed to model relationships among players and emphasize key players. Extensive experiments on multiple sports benchmarks demonstrate that KEANet effectively leverages the additional knowledge and outperforms advanced video captioning models.
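The temporal encoding step described above can be sketched as follows. This is a minimal illustration (not the authors' released code) of running a bidirectional GRU over per-frame video features; the dimensions (`feat_dim`, `hidden_dim`) and module name are assumptions for the example.

```python
import torch
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    """Hypothetical sketch of a Bi-GRU temporal context encoder."""

    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        # bidirectional=True concatenates forward and backward hidden
        # states, so the output dimension is 2 * hidden_dim
        self.gru = nn.GRU(feat_dim, hidden_dim,
                          batch_first=True, bidirectional=True)

    def forward(self, frames):
        # frames: (batch, num_frames, feat_dim) per-frame features
        ctx, _ = self.gru(frames)
        return ctx  # (batch, num_frames, 2 * hidden_dim)

enc = BiGRUEncoder()
video = torch.randn(2, 16, 512)  # 2 clips, 16 frames each
out = enc(video)
print(out.shape)  # torch.Size([2, 16, 512])
```

Each frame's output position then carries context from both earlier and later frames, which is what lets a downstream decoder condition on the whole shooting event rather than a single frame.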