Optimizing Multimodal LLMs for Egocentric Video Understanding: A Solution for the HD-EPIC VQA Challenge
By: Sicheng Yang, Yukai Huang, Shitong Sun, and more
Multimodal Large Language Models (MLLMs) struggle with complex video QA benchmarks like HD-EPIC VQA due to ambiguous queries and answer options, poor long-range temporal reasoning, and non-standardized outputs. We propose a framework that integrates query/choice pre-processing, domain-specific Qwen2.5-VL fine-tuning, a novel Temporal Chain-of-Thought (T-CoT) prompting strategy for multi-step reasoning, and robust post-processing. This system achieves 41.6% accuracy on HD-EPIC VQA, highlighting the need for holistic pipeline optimization in demanding video understanding tasks. Our code and fine-tuned models are available at https://github.com/YoungSeng/Egocentric-Co-Pilot.
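To make the prompting and post-processing stages concrete, the sketch below shows one plausible way a T-CoT prompt could be constructed and a free-form model answer normalized into an option letter. The template wording, the `build_tcot_prompt` and `extract_choice` helpers, and the staged reasoning steps are illustrative assumptions, not the authors' released implementation.

```python
import re

# Hypothetical sketch of the T-CoT prompt construction and answer post-processing
# stages described in the abstract. The prompt template and regex-based choice
# extraction are assumptions, not the code released in the repository.

def build_tcot_prompt(question: str, choices: list[str]) -> str:
    """Build a Temporal Chain-of-Thought style prompt that asks the model to
    reason over events in temporal order before committing to an option."""
    letters = "ABCDE"
    options = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(choices))
    return (
        "You are answering a question about an egocentric video.\n"
        "Step 1: List the key events in the order they occur.\n"
        "Step 2: Relate those events to the question.\n"
        "Step 3: Reply with a single option letter on the last line, e.g. 'Answer: B'.\n\n"
        f"Question: {question}\nOptions:\n{options}"
    )

def extract_choice(model_output: str, num_choices: int) -> str | None:
    """Post-process free-form model output into a standardized option letter."""
    valid = "ABCDE"[:num_choices]
    # Prefer an explicit "Answer: X" pattern, then fall back to the last bare letter.
    match = re.search(rf"Answer[:\s]+([{valid}])\b", model_output, re.IGNORECASE)
    if match:
        return match.group(1).upper()
    bare = re.findall(rf"\b([{valid}])\b", model_output)
    return bare[-1].upper() if bare else None

if __name__ == "__main__":
    prompt = build_tcot_prompt(
        "What does the person do immediately after opening the fridge?",
        ["Pour milk", "Chop onions", "Wash hands", "Close the drawer"],
    )
    print(prompt)
    print(extract_choice("The ordered events suggest pouring milk.\nAnswer: A", 4))  # -> "A"
```

In this sketch the post-processing step tolerates non-standardized outputs by first searching for an explicit "Answer: X" marker and only then falling back to the last standalone option letter, which is one simple way to address the output-formatting issue the abstract mentions.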