iSHIFT: Lightweight Slow-Fast GUI Agent with Adaptive Perception
By: Sarthak Mehrotra, Sairam V C Rebbapragada, Mani Hemanth Reddy Bonthu, and more
Potential Business Impact:
Helps computers understand and use apps better.
Multimodal Large Language Models (MLLMs) show strong potential for interpreting and interacting with complex, pixel-rich Graphical User Interface (GUI) environments. However, building agents that are both efficient for high-level tasks and precise for fine-grained interactions remains challenging. GUI agents must perform routine actions efficiently while also handling tasks that demand exact visual grounding, yet existing approaches struggle when accuracy depends on identifying specific interface elements. These MLLMs also remain large and cannot adapt their reasoning depth to the task at hand. In this work, we introduce iSHIFT: Implicit Slow-fast Hybrid Inference with Flexible Tokens, a lightweight agent that integrates latent thinking (implicit chain-of-thought) with a perception control module. iSHIFT enables an MLLM to switch between a slow mode, which leverages detailed visual grounding for high precision, and a fast mode, which uses global cues for efficiency. Special perception tokens guide attention to relevant screen regions, allowing the model to decide both how to reason and where to focus. Despite its compact 2.5B size, iSHIFT matches state-of-the-art performance on multiple benchmark datasets.
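To make the slow/fast idea concrete, here is a minimal, hypothetical sketch of how an agent might route between a precision-oriented slow path and an efficiency-oriented fast path. The token names, the keyword heuristic, and the region-of-interest logic are illustrative assumptions only; the paper's actual mechanism relies on learned perception tokens inside the MLLM, not hand-written rules.

```python
# Illustrative sketch only -- not the authors' implementation.
# SLOW_TOKEN / FAST_TOKEN, Screenshot, and the keyword heuristic are
# hypothetical stand-ins for iSHIFT's learned perception tokens.

from dataclasses import dataclass
from typing import Optional, Tuple

SLOW_TOKEN = "<slow>"  # hypothetical: fine-grained visual grounding mode
FAST_TOKEN = "<fast>"  # hypothetical: global-cue, low-cost mode


@dataclass
class Screenshot:
    width: int
    height: int


def predict_mode_and_region(
    instruction: str, screen: Screenshot
) -> Tuple[str, Optional[Tuple[int, int, int, int]]]:
    """Stand-in for the perception control module: decides how to reason
    (slow vs. fast) and, in slow mode, where on the screen to focus."""
    needs_precision = any(
        kw in instruction.lower() for kw in ("click", "type", "select")
    )
    if needs_precision:
        # Hypothetical region of interest; the real model would emit
        # perception tokens that steer attention toward this region.
        roi = (0, 0, screen.width // 2, screen.height // 2)
        return SLOW_TOKEN, roi
    return FAST_TOKEN, None


def act(instruction: str, screen: Screenshot) -> str:
    mode, roi = predict_mode_and_region(instruction, screen)
    if mode == SLOW_TOKEN:
        # Slow path: detailed grounding restricted to the focused region.
        return f"slow-mode action grounded in region {roi}"
    # Fast path: act from global layout cues without fine-grained grounding.
    return "fast-mode action from global cues"


if __name__ == "__main__":
    screen = Screenshot(width=1080, height=1920)
    print(act("Click the 'Submit' button", screen))       # routed to slow mode
    print(act("Scroll to the bottom of the page", screen))  # routed to fast mode
```

The point of the sketch is the dispatch structure: a single lightweight decision picks both the reasoning depth and the spatial focus before any action is produced, which is how the abstract describes iSHIFT's adaptive perception at a high level.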
Similar Papers
UIShift: Enhancing VLM-based GUI Agents through Self-supervised Reinforcement Learning
Artificial Intelligence
Teaches computers to understand app screens without human help.
InfiGUIAgent: A Multimodal Generalist GUI Agent with Native Reasoning and Reflection
Artificial Intelligence
Helps computers do tasks by themselves.
UI-AGILE: Advancing GUI Agents with Effective Reinforcement Learning and Precise Inference-Time Grounding
Artificial Intelligence
Helps computers understand and use phone apps better.