Effectively obtaining acoustic, visual and textual data from videos
By: Jorge E. León, Miguel Carrasco
Potential Business Impact:
Creates new datasets from videos that AI models can learn from.
The increasing use of machine learning models has amplified the demand for high-quality, large-scale multimodal datasets. However, the availability of such datasets, especially those combining acoustic, visual and textual data, remains limited. This paper addresses this gap by proposing a method to extract related audio-image-text observations from videos. We detail the process of selecting suitable videos, extracting relevant data pairs, and generating descriptive texts using image-to-text models. Our approach ensures a robust semantic connection between modalities, enhancing the utility of the created datasets for various applications. We also discuss the challenges encountered and propose solutions to improve data quality. The resulting datasets, publicly available, aim to support and advance research in multimodal data analysis and machine learning.
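The abstract describes a three-step pipeline: select suitable videos, extract time-aligned audio-image pairs, and caption the images with an image-to-text model. Below is a minimal sketch of that idea, assuming the ffmpeg command-line tool is installed and using Hugging Face's BLIP captioning model as a stand-in for whichever image-to-text model the authors used; the file names, the sampling interval, and the `extract_pair` helper are illustrative, not taken from the paper.

```python
# Sketch: sample audio-image-text observations from one video.
# Assumptions (not from the paper): ffmpeg on PATH, BLIP as the
# captioner, a fixed 5-second sampling interval, local file output.
import subprocess
from transformers import pipeline

VIDEO = "input.mp4"   # hypothetical source video
INTERVAL = 5          # seconds between sampled observations

def extract_pair(video: str, t: float, idx: int) -> tuple[str, str]:
    """Cut one frame and the surrounding audio clip at time t."""
    frame = f"frame_{idx:05d}.jpg"
    audio = f"audio_{idx:05d}.wav"
    # Grab a single video frame at time t.
    subprocess.run(["ffmpeg", "-y", "-ss", str(t), "-i", video,
                    "-frames:v", "1", frame], check=True)
    # Grab INTERVAL seconds of audio centered on t (video stream dropped).
    subprocess.run(["ffmpeg", "-y", "-ss", str(max(t - INTERVAL / 2, 0)),
                    "-i", video, "-t", str(INTERVAL), "-vn", audio],
                   check=True)
    return frame, audio

# Image-to-text model used to generate the textual modality.
captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")

observations = []
for i, t in enumerate(range(0, 60, INTERVAL)):  # first minute only
    frame, audio = extract_pair(VIDEO, t, i)
    text = captioner(frame)[0]["generated_text"]
    observations.append({"image": frame, "audio": audio, "text": text})
```

Each entry in `observations` is one audio-image-text triple; because the frame and audio clip come from the same moment in the video, the semantic link between modalities that the abstract emphasizes is preserved by construction.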
Similar Papers
From Videos to Indexed Knowledge Graphs -- Framework to Marry Methods for Multimodal Content Analysis and Understanding
CV and Pattern Recognition
Helps computers understand and learn from videos.
Multi-modal video data-pipelines for machine learning with minimal human supervision
CV and Pattern Recognition
Lets computers understand videos and sounds together.
Learning to Highlight Audio by Watching Movies
CV and Pattern Recognition
Makes a video's sound better by watching its visuals.