Abstract: Recent advances in visual-language machine learning models have demonstrated exceptional ability to use natural language and understand visual scenes by training on large, unstructured datasets. However, this training paradigm cannot produce interpretable explanations for model outputs, requires retraining to integrate new information, is highly resource-intensive, and struggles with certain forms of logical reasoning. One promising solution is to integrate neural networks with external symbolic information systems, forming neural-symbolic systems with enhanced reasoning and memory abilities. Such neural-symbolic systems provide more interpretable explanations for their outputs and can assimilate new information without extensive retraining. Using powerful pre-trained Vision-Language Models (VLMs) as the core neural component, augmented by external systems, offers a pragmatic approach to realizing the benefits of neural-symbolic integration. This systematic literature review aims to categorize techniques by which visual-language understanding can be improved through interaction with external symbolic information systems.
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
- Revised the discussion section to include organizational principles of the surveyed AVLMs in Section 6.1, 'Domain-Specific Limitations and Augmentation Solutions'.
- Added an analysis of the most common patterns of AVLM implementations in Section 6.2, 'Common Architectural Patterns in AVLMs'.
- Added an additional table categorizing the surveyed domains.
- Added domain labels to each of the surveyed papers in the appendix.
Assigned Action Editor: ~Fuxin_Li1
Submission Number: 5472