Augmented Vision-Language Models: A Systematic Review
By: Anthony C Davis, Burhan Sadiq, Tianmin Shu, and more
Potential Business Impact:
Helps computers explain *why* they see what they see.
Recent advances in vision-language machine learning models have demonstrated exceptional ability to use natural language and understand visual scenes by training on large, unstructured datasets. However, this training paradigm cannot produce interpretable explanations for its outputs, requires retraining to integrate new information, is highly resource-intensive, and struggles with certain forms of logical reasoning. One promising solution is to integrate neural networks with external symbolic information systems, forming neural-symbolic systems that can enhance reasoning and memory abilities. These neural-symbolic systems provide more interpretable explanations of their outputs and can assimilate new information without extensive retraining. Using powerful pre-trained Vision-Language Models (VLMs) as the core neural component, augmented by external systems, offers a pragmatic path to realizing the benefits of neural-symbolic integration. This systematic literature review categorizes techniques through which visual-language understanding can be improved by interacting with external symbolic information systems.
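To make the surveyed pattern concrete, here is a minimal sketch (not taken from the paper) of a VLM acting as the neural core while an external symbolic store supplies facts and an interpretable reasoning trace. All names (`SymbolicKnowledgeBase`, `StubVLM`, `answer_with_trace`) are hypothetical placeholders, and the VLM is stubbed out rather than a real model.

```python
# Hypothetical sketch of neural-symbolic augmentation: a pre-trained VLM
# grounds entities in an image, and an external symbolic knowledge base
# supplies editable facts plus a human-readable trace of the reasoning.

from dataclasses import dataclass, field


@dataclass
class SymbolicKnowledgeBase:
    """External store of (subject, relation, object) facts; editable without retraining."""
    facts: set = field(default_factory=set)

    def add(self, subject, relation, obj):
        self.facts.add((subject, relation, obj))

    def query(self, subject, relation):
        return [o for (s, r, o) in self.facts if s == subject and r == relation]


class StubVLM:
    """Stand-in for a pre-trained vision-language model: maps an image to entity labels."""
    def detect_entities(self, image):
        return ["zebra"]  # a real VLM would ground these labels in the actual image


def answer_with_trace(vlm, kb, image, relation):
    """Combine neural perception with symbolic lookup; return answers and an explanation trace."""
    trace, answers = [], []
    for entity in vlm.detect_entities(image):
        trace.append(f"VLM detected entity: {entity}")
        for obj in kb.query(entity, relation):
            trace.append(f"KB fact used: ({entity}, {relation}, {obj})")
            answers.append(obj)
    return answers, trace


if __name__ == "__main__":
    kb = SymbolicKnowledgeBase()
    kb.add("zebra", "found_in", "savanna")  # new knowledge added without retraining the VLM
    answers, trace = answer_with_trace(StubVLM(), kb, image=None, relation="found_in")
    print(answers)            # ['savanna']
    print("\n".join(trace))   # interpretable chain: detection step, then the symbolic fact
```

The sketch illustrates the two benefits the abstract highlights: the symbolic store can be updated at any time without touching the neural model, and the returned trace makes the system's answer inspectable.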
Similar Papers
Synthesizing Visual Concepts as Vision-Language Programs
Artificial Intelligence
Makes AI understand pictures and think logically.
A Survey on Efficient Vision-Language Models
CV and Pattern Recognition
Makes smart AI work on small, slow devices.
Systematic Evaluation of Large Vision-Language Models for Surgical Artificial Intelligence
CV and Pattern Recognition
AI helps doctors understand surgery better.