Knowledge-Augmented Large Vision-and-Language Assistant

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: vision and language; knowledge augmentation; instruction tuning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Recent advances in vision-and-language (VL) foundation models allow users to query visual input with natural language, improving human-machine interaction and robotics applications. This paper argues, however, that the visual input available to a pre-trained model is not always sufficient: some tasks demand additional world knowledge. It therefore focuses on augmenting pre-trained VL foundation models with external knowledge to enhance their performance and applicability. The paper introduces an approach that constructs a diverse knowledge database, clusters world knowledge, and integrates the retrieved knowledge into VL models such as InstructBLIP, demonstrating improved results on open-ended multi-modal tasks. In summary, the paper contributes a method for knowledge augmentation in VL models that enhances their performance and applicability across a range of tasks.
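The abstract describes the pipeline only at a high level (build a knowledge database, cluster it, inject retrieved knowledge into a VL model such as InstructBLIP). As a rough, non-authoritative illustration of what such a retrieval-and-injection step might look like, here is a minimal Python sketch; the snippet list, the TF-IDF encoder standing in for a learned text encoder, and the helper names `retrieve_knowledge` and `build_prompt` are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of knowledge clustering and retrieval-augmented prompting.
# All names and the toy knowledge base below are illustrative assumptions,
# not the paper's actual method or data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KNOWLEDGE_SNIPPETS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Golden retrievers are a dog breed known for their friendly temperament.",
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The Great Wall of China stretches over 20,000 kilometers.",
]

# 1) Embed the external knowledge snippets (TF-IDF stands in for a learned encoder).
vectorizer = TfidfVectorizer()
knowledge_vecs = vectorizer.fit_transform(KNOWLEDGE_SNIPPETS)

# 2) Cluster the knowledge base so retrieval can be restricted to a relevant group.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(knowledge_vecs)

def retrieve_knowledge(query: str, top_k: int = 1) -> list[str]:
    """Return the top-k snippets from the cluster closest to the query."""
    query_vec = vectorizer.transform([query])
    # Pick the nearest cluster centroid, then rank that cluster's members.
    centroid_sims = cosine_similarity(query_vec, kmeans.cluster_centers_)[0]
    best_cluster = int(np.argmax(centroid_sims))
    member_idx = [i for i, c in enumerate(cluster_ids) if c == best_cluster]
    member_sims = cosine_similarity(query_vec, knowledge_vecs[member_idx])[0]
    ranked = sorted(zip(member_sims, member_idx), reverse=True)[:top_k]
    return [KNOWLEDGE_SNIPPETS[i] for _, i in ranked]

def build_prompt(question: str) -> str:
    """Prepend retrieved knowledge to the question before passing it,
    together with the image, to a VL model such as InstructBLIP."""
    knowledge = " ".join(retrieve_knowledge(question))
    return f"Context: {knowledge}\nQuestion: {question}\nAnswer:"

print(build_prompt("Where is the Eiffel Tower and when was it built?"))
```

In practice the knowledge encoder, clustering granularity, and how the retrieved text is fused with the visual tokens would all follow the paper's own design; this sketch only shows the generic retrieve-then-prompt pattern.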
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4685