VIST-GPT: Ushering in the Era of Visual Storytelling with LLMs?
By: Mohamed Gado, Towhid Taliee, Muhammad Memon, and more
Potential Business Impact:
Computers write stories from pictures.
Visual storytelling is an interdisciplinary field combining computer vision and natural language processing to generate cohesive narratives from sequences of images. This paper presents a novel approach that leverages recent advancements in multimodal models, specifically adapting transformer-based architectures and large multimodal models, to the visual storytelling task. Using the large-scale Visual Storytelling (VIST) dataset, our VIST-GPT model produces visually grounded, contextually appropriate narratives. We address the limitations of traditional evaluation metrics, such as BLEU, METEOR, ROUGE, and CIDEr, which are ill-suited to this task. Instead, we use RoViST and GROOVIST, novel reference-free metrics designed to assess visual storytelling along the dimensions of visual grounding, coherence, and non-redundancy. These metrics provide a more nuanced evaluation of narrative quality and align closely with human judgment.
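The sketch below is not from the paper; it is a minimal illustration of the evaluation problem the abstract raises, using NLTK's BLEU implementation. The two example stories are hypothetical, but they show how an n-gram overlap metric can score a perfectly valid narrative near zero simply because it uses different wording than the reference, which is why reference-free metrics like RoViST and GROOVIST are proposed instead.

```python
# Minimal sketch (assumption: illustrative sentences, not paper data) showing why
# n-gram overlap metrics such as BLEU struggle with visual storytelling:
# two equally valid narratives for the same photo sequence can share few n-grams.
# Requires: pip install nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference story and a plausible, visually grounded alternative.
reference = "the family gathered on the beach and watched the sun set over the water".split()
candidate = "everyone relaxed by the ocean at dusk enjoying the final light of the day".split()

# Smoothing avoids a hard zero when higher-order n-grams have no overlap.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], candidate, smoothing_function=smooth)
print(f"BLEU-4: {score:.4f}")  # near zero, despite both describing the scene well
```

Reference-free metrics sidestep this issue by scoring the generated story against the images themselves (grounding) and against its own structure (coherence, non-redundancy) rather than against a single gold-standard text.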
Similar Papers
From Image Captioning to Visual Storytelling
Computation and Language
Makes computers tell stories from pictures.
SPoRC-VIST: A Benchmark for Evaluating Generative Natural Narrative in Vision-Language Models
Machine Learning (CS)
Creates podcasts from pictures with talking characters.
ViSTA: Visual Storytelling using Multi-modal Adapters for Text-to-Image Diffusion Models
Computer Vision and Pattern Recognition
Makes stories with pictures that make sense.