Benchmarking Image Similarity Metrics for Novel View Synthesis Applications
By: Charith Wickrema, Sara Leary, Shivangi Sarkar, and more
Potential Business Impact:
Helps computers judge if fake pictures look real.
Traditional image similarity metrics are ineffective at evaluating the similarity between a real image of a scene and an artificially generated version of that viewpoint [6, 9, 13, 14]. Our research evaluates the effectiveness of a new perceptual similarity metric, DreamSim [2], and three popular image similarity metrics, Structural Similarity (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Learned Perceptual Image Patch Similarity (LPIPS) [18, 19], in novel view synthesis (NVS) applications. We create a corpus of artificially corrupted images to quantify the sensitivity and discriminative power of each metric. These tests reveal that the traditional metrics cannot effectively differentiate between images with minor pixel-level changes and those with substantial corruption, whereas DreamSim is more robust to minor defects and effectively evaluates the high-level similarity of an image. Our results also demonstrate that DreamSim provides a more effective and useful evaluation of render quality, especially for NVS renders in real-world use cases where slight rendering corruptions are common but do not affect image utility for human tasks.
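The abstract does not include the paper's evaluation code, so the sketch below is only an illustration of how the four metrics could be scored for one reference photo and one NVS render. It assumes the scikit-image, torch, lpips, and dreamsim Python packages, follows the usage examples published with the lpips and dreamsim packages, and uses hypothetical file names (reference_view.png, nvs_render.png).

```python
# Minimal sketch: scoring one (reference, render) pair with SSIM, PSNR,
# LPIPS, and DreamSim. Assumes both images have the same resolution and
# that the lpips and dreamsim packages are installed; file paths are
# hypothetical placeholders.
import numpy as np
import torch
from PIL import Image
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
import lpips
from dreamsim import dreamsim


def to_lpips_tensor(img: Image.Image) -> torch.Tensor:
    """Convert a PIL image to a 1x3xHxW float tensor in [-1, 1], as LPIPS expects."""
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0


reference = Image.open("reference_view.png").convert("RGB")  # real photo of the scene
render = Image.open("nvs_render.png").convert("RGB")         # synthesized view

ref_np = np.asarray(reference)
ren_np = np.asarray(render)

# Pixel/structure-level metrics (higher means more similar).
ssim = structural_similarity(ref_np, ren_np, channel_axis=-1)
psnr = peak_signal_noise_ratio(ref_np, ren_np)

# Learned perceptual metrics (lower distance means more similar).
lpips_model = lpips.LPIPS(net="alex")
with torch.no_grad():
    lpips_dist = lpips_model(to_lpips_tensor(reference), to_lpips_tensor(render)).item()

# DreamSim, per the package's documented usage; device="cpu" keeps the sketch
# runnable without a GPU.
dreamsim_model, preprocess = dreamsim(pretrained=True, device="cpu")
with torch.no_grad():
    dreamsim_dist = dreamsim_model(preprocess(reference), preprocess(render)).item()

print(f"SSIM: {ssim:.4f}  PSNR: {psnr:.2f} dB  "
      f"LPIPS: {lpips_dist:.4f}  DreamSim: {dreamsim_dist:.4f}")
```

Running a loop of this kind over a corpus of deliberately corrupted renders is one way to compare each metric's sensitivity to minor pixel-level defects versus substantial corruption, which is the comparison the abstract describes.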
Similar Papers
Appreciate the View: A Task-Aware Evaluation Framework for Novel View Synthesis
CV and Pattern Recognition
Checks if computer-made pictures look real.
A Novel Image Similarity Metric for Scene Composition Structure
CV and Pattern Recognition
Checks if AI pictures keep their real-world shapes.