LEVERAGING AUXILIARY TEXT FOR DEEP RECOGNITION OF UNSEEN VISUAL RELATIONSHIPS

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Desk Rejected Submission · Readers: Everyone
Keywords: computer vision, natural language processing, visual relationship detection, scene graph generation, few shot learning
Abstract: One of the most difficult tasks in \emph{scene understanding} is recognizing interactions between objects in an image, a task commonly called \emph{visual relationship detection} (VRD). We ask whether VRD performance can be improved when auxiliary textual data is available in addition to the standard visual data used for training VRD models. We present a new deep model that can leverage such additional text. The model relies on a shared text--image representation of subject-verb-object relationships appearing in the text and of object interactions in images. Our method is the first to enable recognition of visual relationships that are missing from the visual training data and appear only in the auxiliary text. We evaluate our approach with two different text sources, text originating in images and text originating in books, on two large-scale recognition tasks: VRD and Scene Graph Generation. We show a surprising result: the model trained with text originating in books outperforms the model trained with text originating in images on unseen relationship recognition, and is comparable to it on seen relationship recognition.
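
The abstract only sketches the shared text--image representation, so the following is a minimal, hypothetical illustration of the idea: subject-verb-object triples drawn from text and pooled visual features of object pairs are projected into one joint space and compared by cosine similarity, which lets triples that appear only in the auxiliary text be scored against image regions at test time. All class names, dimensions, and projection layers below are our own assumptions for illustration, not details taken from the paper's implementation.

```python
# Minimal sketch (not the authors' code) of a shared text--image embedding
# for subject-verb-object (SVO) triples. Names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSVOSpace(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, joint_dim=256, visual_dim=2048):
        super().__init__()
        # Word embeddings shared by subject, verb, and object tokens.
        self.word_emb = nn.Embedding(vocab_size, embed_dim)
        # Projects the concatenated SVO embedding into the joint space.
        self.text_proj = nn.Linear(3 * embed_dim, joint_dim)
        # Projects a visual feature of a subject-object region pair into the same space.
        self.visual_proj = nn.Linear(visual_dim, joint_dim)

    def embed_triple(self, subj, verb, obj):
        # subj, verb, obj: LongTensors of token ids, shape (batch,)
        t = torch.cat([self.word_emb(subj), self.word_emb(verb), self.word_emb(obj)], dim=-1)
        return F.normalize(self.text_proj(t), dim=-1)

    def embed_visual(self, feats):
        # feats: FloatTensor of pooled region features, shape (batch, visual_dim)
        return F.normalize(self.visual_proj(feats), dim=-1)

    def score(self, subj, verb, obj, feats):
        # Cosine similarity between a textual triple and a visual object pair.
        return (self.embed_triple(subj, verb, obj) * self.embed_visual(feats)).sum(-1)
```

Because relationships that occur only in the auxiliary text still receive text-side embeddings, the same similarity score can rank them against image regions even though they never appear in the visual training set; a contrastive or triplet loss over matched and mismatched text-image pairs would be one plausible way to train such a space.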