Visual moral inference and communication
By: Warren Zhu, Aida Ramezani, Yang Xu
Potential Business Impact:
AI understands right and wrong from pictures.
Humans can make moral inferences from multiple sources of input. In contrast, automated moral inference in artificial intelligence typically relies on language models with textual input. However, morality is conveyed through modalities beyond language. We present a computational framework that supports moral inference from natural images, demonstrated in two related tasks: 1) inferring human moral judgment toward visual images and 2) analyzing patterns in moral content communicated via images from public news. We find that models based on text alone cannot capture fine-grained human moral judgments toward visual stimuli, whereas language-vision fusion models offer better precision in visual moral inference. Furthermore, applications of our framework to news data reveal implicit biases in news categories and geopolitical discussions. Our work opens avenues for automating visual moral inference and discovering patterns of visual moral communication in public media.
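To make the language-vision fusion idea concrete, here is a minimal sketch of one common approach: precomputed image and text embeddings (e.g., from a pretrained encoder such as CLIP) are concatenated and regressed onto a scalar moral-judgment rating. This is a hypothetical illustration, not the authors' implementation; the module names, embedding dimensions, and training target below are all assumptions.

```python
# Hypothetical late-fusion sketch for visual moral inference.
# Not the paper's architecture: inputs are assumed to be precomputed
# image/text feature vectors, and the output is a scalar moral rating.

import torch
import torch.nn as nn

class FusionMoralScorer(nn.Module):
    def __init__(self, img_dim=512, txt_dim=512, hidden=256):
        super().__init__()
        # In practice the inputs would come from pretrained encoders
        # (a vision backbone and a text encoder); taking feature
        # vectors directly keeps this sketch self-contained.
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted moral-judgment score
        )

    def forward(self, img_feats, txt_feats):
        # Late fusion: concatenate the two modalities, then regress.
        fused = torch.cat([img_feats, txt_feats], dim=-1)
        return self.head(fused).squeeze(-1)

# Toy usage with a batch of 4 image/caption feature pairs.
model = FusionMoralScorer()
img = torch.randn(4, 512)   # e.g., image embeddings
txt = torch.randn(4, 512)   # e.g., caption embeddings
scores = model(img, txt)    # predicted moral ratings, shape (4,)
print(scores.shape)
```

A text-only baseline in this setup would drop the image branch and regress on the text features alone, which is the comparison the abstract reports fusion models improving on.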
Similar Papers
Artificial Intelligence Can Emulate Human Normative Judgments on Emotional Visual Scenes
Human-Computer Interaction
AI matches human judgments of emotional scenes.
Beyond Human Judgment: A Bayesian Evaluation of LLMs' Moral Values Understanding
Computation and Language
AI spots bad behavior better than most people.
Mind with Eyes: from Language Reasoning to Multimodal Reasoning
Computation and Language
Computers understand pictures and words together.