With Great Context Comes Great Prediction Power: Classifying Objects via Geo-Semantic Scene Graphs
By: Ciprian Constantinescu, Marius Leordeanu
Potential Business Impact:
Helps computers understand what objects are by their surroundings.
Humans effortlessly identify objects by leveraging a rich understanding of the surrounding scene, including spatial relationships, material properties, and the co-occurrence of other objects. In contrast, most computational object recognition systems operate on isolated image regions that carry little meaning on their own, ignoring this vital contextual information. This paper argues for the critical role of context and introduces a novel framework for contextual object classification. We first construct a Geo-Semantic Contextual Graph (GSCG) from a single monocular image. This rich, structured representation is built by integrating a metric depth estimator with a unified panoptic and material segmentation model. The GSCG encodes objects as nodes with detailed geometric, chromatic, and material attributes, and their spatial relationships as edges. This explicit graph structure makes the model's reasoning process inherently interpretable. We then propose a specialized graph-based classifier that aggregates features from a target object, its immediate neighbors, and the global scene context to predict its class. Through extensive ablation studies, we demonstrate that our context-aware model achieves a classification accuracy of 73.4%, dramatically outperforming context-agnostic variants (as low as 38.4%). Furthermore, our GSCG-based approach significantly surpasses strong baselines, including fine-tuned ResNet models (at most 53.5%) and a state-of-the-art multimodal Large Language Model (LLM), Llama 4 Scout, which, even when given the full image alongside a detailed description of objects, reaches at most 42.3%. These results on the COCO 2017 train/val splits highlight the superiority of explicitly structured and interpretable context for object recognition tasks.
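To make the described structure concrete, the sketch below shows one minimal way a GSCG and the target/neighbor/scene feature aggregation could be organized in Python. The class and function names (GSCGNode, GSCG, contextual_feature), the exact attribute set, and the mean-pooling aggregation are illustrative assumptions, not the paper's released code.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

import numpy as np


@dataclass
class GSCGNode:
    """One object node with geometric, chromatic, and material attributes."""
    object_id: int
    centroid_xyz: Tuple[float, float, float]   # metric 3D position (from depth)
    size_xyz: Tuple[float, float, float]       # rough metric extent
    mean_rgb: Tuple[float, float, float]       # chromatic descriptor
    material_hist: np.ndarray                  # distribution over material labels


@dataclass
class GSCG:
    """Geo-Semantic Contextual Graph: object nodes plus spatial-relation edges."""
    nodes: Dict[int, GSCGNode] = field(default_factory=dict)
    edges: Dict[Tuple[int, int], np.ndarray] = field(default_factory=dict)

    def add_node(self, node: GSCGNode) -> None:
        self.nodes[node.object_id] = node

    def add_edge(self, i: int, j: int, relation_feat: np.ndarray) -> None:
        self.edges[(i, j)] = relation_feat  # e.g. relative offset / distance features

    def neighbors(self, i: int):
        return [j for (a, j) in self.edges if a == i]


def node_feature(node: GSCGNode) -> np.ndarray:
    """Flatten a node's attributes into a single feature vector."""
    return np.concatenate([node.centroid_xyz, node.size_xyz,
                           node.mean_rgb, node.material_hist])


def contextual_feature(graph: GSCG, target_id: int) -> np.ndarray:
    """Concatenate target, mean-pooled neighbor, and mean-pooled scene features."""
    target = node_feature(graph.nodes[target_id])
    nbrs = [node_feature(graph.nodes[j]) for j in graph.neighbors(target_id)]
    nbrs = nbrs or [np.zeros_like(target)]   # isolated object: no neighbors
    scene = [node_feature(n) for n in graph.nodes.values()]
    return np.concatenate([target, np.mean(nbrs, axis=0), np.mean(scene, axis=0)])
```

In the paper's setting, a classifier would then map such a concatenated target/neighbor/scene feature to an object class; the sketch stops at feature assembly.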
Similar Papers
Towards 3D Object-Centric Feature Learning for Semantic Scene Completion
CV and Pattern Recognition
Helps self-driving cars see objects better.
Edge-Centric Relational Reasoning for 3D Scene Graph Prediction
CV and Pattern Recognition
Helps computers understand 3D scenes better.
From Pixels to Predicates: Structuring urban perception with scene graphs
CV and Pattern Recognition
Makes computers understand how people feel about places.