Word to Sentence Visual Semantic Similarity for Caption Generation: Lessons Learned

Published: 01 Jan 2023 · Last Modified: 19 Feb 2025 · MVA 2023 · License: CC BY-SA 4.0
Abstract: This paper focuses on enhancing the captions generated by image captioning systems. We propose an approach that improves caption generation by selecting the candidate most closely related to the image rather than the output the model deems most likely. Our method revises the beam search output of the language generator from a visual context perspective, employing a visual semantic measure at both the word and sentence levels to match the caption to the related information in the image. This approach can be applied to any captioning system as a post-processing step.
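The core idea of the abstract, re-ranking beam-search candidates by visual similarity instead of language-model likelihood, can be sketched as follows. This is a minimal illustration, not the paper's actual model: the `VOCAB` word vectors, `embed_caption` mean-pooling, and `image_embedding` are all hypothetical stand-ins for a real visual-semantic encoder.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rerank_captions(image_emb, candidates, embed_caption):
    # Post-processing step: score each beam-search candidate by its
    # visual similarity to the image, not by model likelihood.
    return max(candidates,
               key=lambda c: cosine_similarity(image_emb, embed_caption(c)))

# Toy word vectors standing in for a visual-semantic embedding (hypothetical).
VOCAB = {
    "dog":  [1.0, 0.0, 0.2],
    "park": [0.8, 0.1, 0.3],
    "car":  [0.0, 1.0, 0.1],
    "road": [0.1, 0.9, 0.0],
}

def embed_caption(caption):
    # Sentence-level embedding as the mean of word-level vectors,
    # mirroring the word-to-sentence matching described in the abstract.
    vecs = [VOCAB[w] for w in caption.lower().split() if w in VOCAB]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

image_embedding = [0.9, 0.05, 0.25]  # pretend image encoder output
beam = ["a dog in the park", "a car on the road"]
best = rerank_captions(image_embedding, beam, embed_caption)
```

Here the re-ranker would pick the caption whose pooled embedding points in the same direction as the image embedding, regardless of which candidate the language model ranked first.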
