Physics-Based Benchmarking Metrics for Multimodal Synthetic Images
By: Kishor Datta Gupta, Marufa Kamal, Md. Mahfuzur Rahman, and more
Potential Business Impact:
Checks whether AI-made pictures match their text and follow real-world rules.
Current state-of-the-art metrics such as BLEU, CIDEr, VQA score, SigLIP-2, and CLIPScore often fail to capture semantic or structural accuracy, especially in domain-specific or context-dependent scenarios. To address this, the paper proposes a Physics-Constrained Multimodal Data Evaluation (PCMDE) metric that combines large language models with reasoning, knowledge-based mapping, and vision-language models. The architecture comprises three main stages: (1) feature extraction of spatial and semantic information, using object detection and VLMs to obtain multimodal features; (2) Confidence-Weighted Component Fusion for adaptive component-level validation; and (3) physics-guided reasoning with large language models to enforce structural and relational constraints (e.g., alignment, position, consistency).
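The second stage, Confidence-Weighted Component Fusion, can be illustrated with a minimal sketch: per-component validation scores (e.g., one per detected object) are fused into a single image-level score, with each component weighted by the detector's or VLM's confidence so that uncertain components contribute less. All names and the exact weighting scheme here are assumptions for illustration; the paper's actual formulation may differ.

```python
def confidence_weighted_fusion(components):
    """Fuse per-component validation scores into one image-level score.

    components: list of (score, confidence) pairs, each value in [0, 1].
    A hypothetical sketch: confidence-normalized weighted average, so
    low-confidence components are down-weighted.
    """
    total_conf = sum(conf for _, conf in components)
    if total_conf == 0:
        return 0.0  # no confident evidence at all
    return sum(score * conf for score, conf in components) / total_conf

# Example: three detected objects with per-object semantic scores
# and detection confidences.
parts = [(0.9, 0.8), (0.6, 0.5), (0.2, 0.1)]
fused = confidence_weighted_fusion(parts)
```

In this sketch, the high-confidence well-matched object dominates the fused score, while the low-confidence outlier barely affects it; other fusion rules (e.g., learned weights) would fit the same interface.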
Similar Papers
Bridging the Modality Gap by Similarity Standardization with Pseudo-Positive Samples
Computation and Language
Makes searching text and pictures together work better.
Bootstrapping Physics-Grounded Video Generation through VLM-Guided Iterative Self-Refinement
CV and Pattern Recognition
Makes videos follow real-world physics rules.
Multi-Physics: A Comprehensive Benchmark for Multimodal LLMs Reasoning on Chinese Multi-Subject Physics Problems
Computation and Language
Tests AI on Chinese physics problems.