Augmented Vision-Language Models: A Systematic Review

Published: July 24, 2025 | arXiv ID: 2507.22933v1

By: Anthony C Davis, Burhan Sadiq, Tianmin Shu, and more

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Helps vision-language AI explain *why* it interprets a scene the way it does, and absorb new facts without retraining.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent advances in vision-language machine learning models have demonstrated an exceptional ability to use natural language and understand visual scenes by training on large, unstructured datasets. However, this training paradigm cannot produce interpretable explanations for its outputs, requires retraining to integrate new information, is highly resource-intensive, and struggles with certain forms of logical reasoning. One promising solution is to integrate neural networks with external symbolic information systems, forming neural-symbolic systems that can enhance reasoning and memory abilities. These neural-symbolic systems provide more interpretable explanations for their outputs and can assimilate new information without extensive retraining. Using powerful pre-trained Vision-Language Models (VLMs) as the core neural component, augmented by external systems, offers a pragmatic approach to realizing the benefits of neural-symbolic integration. This systematic literature review aims to categorize techniques through which vision-language understanding can be improved by interacting with external symbolic information systems.
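The augmentation pattern the abstract describes can be sketched concretely: a frozen neural VLM handles perception, while an external symbolic store supplies facts that ground the answer in an explanation trace and can be updated at runtime without retraining. The Python sketch below is a minimal illustration of that pattern under those assumptions, not the paper's method; `ToyVLM`, `SymbolicStore`, and all data in them are hypothetical stand-ins.

```python
from dataclasses import dataclass, field


@dataclass
class ToyVLM:
    """Stand-in for a frozen, pre-trained vision-language model.

    A real system would wrap an actual VLM; here we map image IDs to
    canned perceptual outputs so the example stays self-contained.
    """
    perceptions: dict = field(default_factory=lambda: {
        "img_001": {"objects": ["penguin", "ice"],
                    "caption": "a penguin standing on ice"},
    })

    def perceive(self, image_id: str) -> dict:
        return self.perceptions.get(image_id,
                                    {"objects": [], "caption": "unknown"})


class SymbolicStore:
    """External symbolic memory holding (subject, relation, object) triples.

    Facts can be asserted at runtime: integrating new information never
    requires retraining the neural component.
    """

    def __init__(self):
        self.triples = set()

    def assert_fact(self, subj: str, rel: str, obj: str) -> None:
        self.triples.add((subj, rel, obj))

    def lookup(self, subj: str) -> list:
        return [t for t in self.triples if t[0] == subj]


def answer_with_explanation(vlm: ToyVLM, store: SymbolicStore,
                            image_id: str, subject: str) -> dict:
    """Neural-symbolic loop: the VLM perceives, the symbolic store
    grounds the answer so the output carries an explanation trace."""
    percept = vlm.perceive(image_id)
    detected = subject in percept["objects"]
    facts = store.lookup(subject) if detected else []
    return {
        "answer": (f"Yes, the image shows a {subject}." if detected
                   else f"No {subject} detected."),
        "explanation": [f"VLM caption: {percept['caption']}"]
                       + [f"Known fact: {s} {r} {o}" for s, r, o in facts],
    }


if __name__ == "__main__":
    vlm = ToyVLM()
    kb = SymbolicStore()
    kb.assert_fact("penguin", "is_a", "bird")
    kb.assert_fact("penguin", "lives_in", "antarctica")  # added post-training
    print(answer_with_explanation(vlm, kb, "img_001", "penguin"))
```

Note how two of the limitations listed above are addressed structurally in this sketch: the explanation is read off the symbolic trace rather than inferred after the fact, and asserting a new triple changes behavior immediately without touching the neural weights.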

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
39 pages

Category
Computer Science: Computation and Language