"I Know It When I See It": Mood Spaces for Connecting and Expressing Visual Concepts
By: Huzheng Yang, Katherine Xu, Michael D. Grossberg, and more
Potential Business Impact:
Lets computers capture and express hard-to-define visual concepts, such as moods, from a few example images.
Expressing complex concepts is easy when they can be labeled or quantified, but many ideas are hard to define yet instantly recognizable. We propose a Mood Board, where users convey abstract concepts with examples that hint at the intended direction of attribute changes. We compute an underlying Mood Space that 1) factors out irrelevant features and 2) finds the connections between images, bringing relevant concepts closer together. We invent a fibration computation to compress/decompress pre-trained features into/from a compact space, 50-100x smaller. The main innovation is learning to mimic the pairwise affinity relationships of the image tokens across exemplars. To capture the coarse-to-fine hierarchical structure of the Mood Space, we compute the top eigenvectors of the affinity matrix and define a loss in the eigenvector space. The resulting Mood Space is locally linear and compact, allowing image-level operations, such as object averaging, visual analogy, and pose transfer, to be performed as simple vector operations in Mood Space. Learning is computationally efficient, requires no fine-tuning, needs only a few (2-20) exemplars, and takes less than a minute.
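The abstract describes two mechanisms: an eigenvector-space loss that trains a compact space to mimic the pairwise token affinities of pre-trained features, and image-level edits performed as vector arithmetic in the learned Mood Space. Below is a minimal, hypothetical PyTorch sketch of both ideas; the function names, the cosine affinity, the projection-matrix comparison, and the choice of k are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def affinity(tokens: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine affinity between image tokens; tokens: (N, D) -> (N, N)."""
    z = F.normalize(tokens, dim=-1)
    return z @ z.T

def top_eigvecs(A: torch.Tensor, k: int) -> torch.Tensor:
    """Top-k eigenvectors of a symmetric affinity matrix (its coarse structure)."""
    # torch.linalg.eigh returns eigenvalues in ascending order,
    # so the top-k eigenvectors are the last k columns.
    _, vecs = torch.linalg.eigh(A)
    return vecs[:, -k:]

def eigenspace_loss(compact: torch.Tensor, pretrained: torch.Tensor,
                    k: int = 8) -> torch.Tensor:
    """Penalize mismatch between the top eigenvector structures of the
    compact-space and pre-trained-space token affinities."""
    U = top_eigvecs(affinity(compact), k)
    V = top_eigvecs(affinity(pretrained), k)
    # Compare the spanned subspaces via projection matrices, which is
    # invariant to eigenvector sign flips and rotations within the subspace.
    return ((U @ U.T - V @ V.T) ** 2).mean()

def visual_analogy(e_a: torch.Tensor, e_b: torch.Tensor,
                   e_c: torch.Tensor) -> torch.Tensor:
    """'b is to a as ? is to c': a simple vector operation in the compact
    space; the result would be decompressed back to pixels by the decoder."""
    return e_c + (e_a - e_b)

# Toy usage with random stand-ins for one exemplar's token features:
tokens_pre = torch.randn(196, 768)  # e.g., ViT patch tokens from a frozen backbone
tokens_cmp = torch.randn(196, 12)   # compact features, roughly 50-100x smaller
loss = eigenspace_loss(tokens_cmp, tokens_pre)
print(loss.item())
```

Comparing projection matrices rather than raw eigenvectors sidesteps sign and ordering ambiguity among eigenvectors; how the paper itself handles this is not stated in the abstract.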
Similar Papers
Vibe Spaces for Creatively Connecting and Expressing Visual Concepts
Computer Vision and Pattern Recognition
Makes AI create new images by blending ideas.
Emotions Where Art Thou: Understanding and Characterizing the Emotional Latent Space of Large Language Models
Computation and Language
Teaches computers to understand and change feelings.
Bridging the behavior-neural gap: A multimodal AI reveals the brain's geometry of emotion more accurately than human self-reports
Human-Computer Interaction
AI understands feelings better than people's words.