Router-Suggest: Dynamic Routing for Multimodal Auto-Completion in Visually-Grounded Dialogs
By: Sandeep Mishra, Devichand Budagam, Anubhab Mandal, and more
Potential Business Impact:
Helps computers guess words by seeing and reading.
Real-time multimodal auto-completion is essential for digital assistants, chatbots, design tools, and healthcare consultations, where user inputs rely on shared visual context. We introduce Multimodal Auto-Completion (MAC), a task that predicts upcoming characters in live chats using partially typed text and visual cues. Unlike traditional text-only auto-completion (TAC), MAC grounds predictions in multimodal context to better capture user intent. To enable this task, we adapt MMDialog and ImageChat to create benchmark datasets. We evaluate leading vision-language models (VLMs) against strong textual baselines, highlighting trade-offs between accuracy and efficiency. We present Router-Suggest, a router framework that dynamically selects between textual models and VLMs based on dialog context, along with a lightweight variant for resource-constrained environments. Router-Suggest achieves a 2.3x to 10x speedup over the best-performing VLM. A user study shows that VLMs significantly outperform textual models in user satisfaction, notably reducing typing effort and improving the quality of completions in multi-turn conversations. These findings underscore the need for multimodal context in auto-completion, leading to smarter, user-aware assistants.
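The core idea of a dynamic router, deciding per request whether the cheap text-only model suffices or the slower VLM is needed, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the function names, the `DialogContext` structure, and the keyword heuristic for detecting visually-grounded prefixes are all assumptions made for this example.

```python
# Illustrative sketch of context-based routing between a text model and a VLM.
# The heuristic (route to the VLM only when an image is present AND the typed
# prefix appears to reference it) is a placeholder, not Router-Suggest itself.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DialogContext:
    partial_text: str   # characters the user has typed so far
    has_image: bool     # is there shared visual context in this dialog?
    # Hypothetical cue words suggesting the prefix depends on the image.
    visual_keywords: List[str] = field(
        default_factory=lambda: ["this", "that", "picture", "photo", "image", "here"]
    )

def route(ctx: DialogContext) -> str:
    """Return 'vlm' when the prefix likely needs visual grounding,
    otherwise fall back to the faster text-only model."""
    if not ctx.has_image:
        return "text"
    words = ctx.partial_text.lower().split()
    if any(kw in words for kw in ctx.visual_keywords):
        return "vlm"
    return "text"

print(route(DialogContext("look at this", has_image=True)))  # deictic prefix -> VLM
print(route(DialogContext("see you tom", has_image=True)))   # no visual cue -> text model
```

A learned router would replace the keyword check with a lightweight classifier over dialog features, trading a small routing cost for the 2.3x to 10x speedup reported when the VLM can be skipped.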
Similar Papers
Towards Resource-Efficient Multimodal Intelligence: Learned Routing among Specialized Expert Models
Computation and Language
Smart AI uses the right tool for each job.
Enhancing Vision-Language Models for Autonomous Driving through Task-Specific Prompting and Spatial Reasoning
CV and Pattern Recognition
Helps self-driving cars understand roads better.
ECVL-ROUTER: Scenario-Aware Routing for Vision-Language Models
Machine Learning (CS)
Smartly picks the right AI for speed or quality.