Does CLIP perceive art the same way we do?
By: Andrea Asperti, Leonardo Dessì, Maria Chiara Tonetti, and others
Potential Business Impact:
Tests whether AI models see art the way people do.
CLIP has emerged as a powerful multimodal model capable of connecting images and text through joint embeddings, but to what extent does it "see" the same way humans do, especially when interpreting artworks? In this paper, we investigate CLIP's ability to extract high-level semantic and stylistic information from paintings, covering both human-created and AI-generated imagery. We evaluate its perception along multiple dimensions: content, scene understanding, artistic style, historical period, and the presence of visual deformations or artifacts. By designing targeted probing tasks and comparing CLIP's responses to human annotations and expert benchmarks, we assess its alignment with human perceptual and contextual understanding. Our findings reveal both strengths and limitations in CLIP's visual representations, particularly with respect to aesthetic cues and artistic intent. We further discuss the implications of these insights for using CLIP as a guidance mechanism in generative processes such as style transfer and prompt-based image synthesis. Our work highlights the need for deeper interpretability in multimodal systems, especially in creative domains where nuance and subjectivity play a central role.
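The probing approach rests on CLIP's joint embedding space: an image and a set of candidate text labels are embedded, and the image-text similarity scores serve as the model's "answer" to each probe. The paper's exact probes and label sets aren't given here, so the sketch below is a minimal, assumed example of a zero-shot style probe using the Hugging Face transformers CLIP API; the checkpoint, prompt template, image path, and style labels are all illustrative.

```python
# Minimal sketch of a zero-shot CLIP probe for artistic style.
# Assumptions: the checkpoint, label set, prompt template, and image
# path are illustrative; the paper's actual probing tasks may differ.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical style labels; a real probe would use expert-curated classes.
styles = ["impressionism", "cubism", "baroque", "surrealism", "renaissance"]
prompts = [f"a painting in the style of {s}" for s in styles]

image = Image.open("painting.jpg")  # placeholder path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores scaled by CLIP's
# learned temperature; softmax turns them into a distribution over styles.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for style, p in sorted(zip(styles, probs.tolist()), key=lambda t: -t[1]):
    print(f"{style}: {p:.3f}")
```

Comparing such predicted distributions against human annotations (for style, period, content, and so on) yields the kind of alignment measurements the paper describes.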
Similar Papers
Don't Judge Before You CLIP: A Unified Approach for Perceptual Tasks
CV and Pattern Recognition
Lets one model judge pictures the way people perceive them.
DesignCLIP: Multimodal Learning with CLIP for Design Patent Understanding
CV and Pattern Recognition
Helps find and sort design ideas faster.
Generalizable Prompt Learning of CLIP: A Brief Overview
CV and Pattern Recognition
Reviews ways to teach CLIP new tasks using prompts.