Extracting Symbolic Sequences from Visual Representations via Self-Supervised Learning
By: Victor Sebastian Martinez Pozos, Ivan Vladimir Meza Ruiz
Potential Business Impact:
Teaches computers to describe pictures with sequences of word-like symbols.
This paper explores the potential of abstracting complex visual information into discrete, structured symbolic sequences using self-supervised learning (SSL). Inspired by how language abstracts and organizes information to enable better reasoning and generalization, we propose a novel approach for generating symbolic representations from visual data. To learn these sequences, we extend the DINO framework to handle visual and symbolic information. Initial experiments suggest that the generated symbolic sequences capture a meaningful level of abstraction, though further refinement is required. An advantage of our method is its interpretability: the sequences are produced by a decoder transformer using cross-attention, allowing attention maps to be linked to specific symbols and offering insight into how these representations correspond to image regions. This approach lays the foundation for creating interpretable symbolic representations with potential applications in high-level scene understanding.
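To illustrate the decoder-with-cross-attention idea mentioned in the abstract, here is a minimal, hypothetical sketch; it is not the authors' implementation. It assumes DINO-style ViT patch features of dimension 384, 16 learned symbol queries, and a 512-entry symbol vocabulary, all of which are illustrative choices rather than values from the paper.

```python
import torch
import torch.nn as nn

class SymbolicDecoder(nn.Module):
    """Hypothetical sketch: learned symbol queries cross-attend to image
    patch features (e.g., from a DINO-style ViT backbone) and are mapped
    to a discrete vocabulary, yielding a symbolic sequence per image."""

    def __init__(self, feat_dim=384, num_symbols=16, vocab_size=512, num_layers=2):
        super().__init__()
        # One learned query per output symbol position.
        self.queries = nn.Parameter(torch.randn(num_symbols, feat_dim))
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=6, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        # Projects each decoded query to logits over the symbol vocabulary.
        self.to_vocab = nn.Linear(feat_dim, vocab_size)

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, feat_dim) from the visual encoder.
        batch = patch_features.size(0)
        queries = self.queries.unsqueeze(0).expand(batch, -1, -1)
        decoded = self.decoder(tgt=queries, memory=patch_features)
        logits = self.to_vocab(decoded)   # (batch, num_symbols, vocab_size)
        symbols = logits.argmax(dim=-1)   # discrete symbolic sequence per image
        return logits, symbols


if __name__ == "__main__":
    # Stand-in for ViT-S/16 patch embeddings: 2 images, 196 patches, dim 384.
    features = torch.randn(2, 196, 384)
    logits, symbols = SymbolicDecoder()(features)
    print(symbols.shape)  # torch.Size([2, 16])
```

In a sketch like this, the cross-attention weights inside each decoder layer tie every output symbol to specific image patches, which is the kind of symbol-to-region attention map the abstract points to for interpretability.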
Similar Papers
Self-supervised structured object representation learning
CV and Pattern Recognition
Helps computers see objects in pictures better.
Symbol-Temporal Consistency Self-supervised Learning for Robust Time Series Classification
Machine Learning (CS)
Learns health patterns even with messy data.
Variational Self-Supervised Learning
Machine Learning (CS)
Teaches computers to learn from pictures without labels.