MIRA: Empowering One-Touch AI Services on Smartphones with MLLM-based Instruction Recommendation
By: Zhipeng Bian, Jieming Zhu, Xuyang Xie, and more
Potential Business Impact:
Lets your phone suggest which AI service to use.
The rapid advancement of generative AI technologies is driving the integration of diverse AI-powered services into smartphones, transforming how users interact with their devices. To simplify access to predefined AI services, this paper introduces MIRA, a pioneering framework for task instruction recommendation that enables intuitive one-touch AI tasking on smartphones. With MIRA, users can long-press on image or text objects to receive contextually relevant instruction recommendations for executing AI tasks. Our work introduces three key innovations: 1) a multimodal large language model (MLLM)-based recommendation pipeline with structured reasoning to extract key entities, infer user intent, and generate precise instructions; 2) a template-augmented reasoning mechanism that integrates high-level reasoning templates, enhancing task inference accuracy; 3) a prefix-tree-based constrained decoding strategy that restricts outputs to predefined instruction candidates, ensuring coherent and intent-aligned suggestions. Through evaluation on a real-world annotated dataset and a user study, MIRA demonstrates substantial improvements in instruction recommendation accuracy. The encouraging results highlight MIRA's potential to transform how users engage with AI services on their smartphones, offering a more seamless and efficient experience.
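To make the third innovation concrete, below is a minimal sketch of prefix-tree (trie) constrained decoding: the candidate instructions are pre-tokenized into token-ID sequences, and at each decoding step only tokens that keep the generated prefix on some candidate path are permitted. The paper's implementation is not reproduced here, so the names (`InstructionTrie`, `allowed_next_tokens`) and the toy token IDs are hypothetical.

```python
# Minimal sketch of prefix-tree constrained decoding over tokenized
# instruction candidates. Names and token IDs are illustrative, not
# taken from the MIRA paper.

from typing import Dict, List


class InstructionTrie:
    """Trie over tokenized instruction candidates; each root-to-terminal
    path spells out one complete candidate instruction."""

    END = -1  # sentinel marking the end of a complete instruction

    def __init__(self, candidates: List[List[int]]):
        self.root: Dict[int, dict] = {}
        for token_ids in candidates:
            node = self.root
            for tok in token_ids:
                node = node.setdefault(tok, {})
            node[self.END] = {}  # mark a complete instruction

    def allowed_next_tokens(self, prefix: List[int]) -> List[int]:
        """Return token IDs that keep `prefix` on some candidate path;
        an empty list means the prefix has left the candidate set."""
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []
            node = node[tok]
        return [t for t in node if t != self.END]


# Toy usage with made-up token IDs for three candidate instructions.
trie = InstructionTrie([[5, 9, 2], [5, 9, 7], [3, 1]])
print(trie.allowed_next_tokens([]))      # [5, 3] -> valid first tokens
print(trie.allowed_next_tokens([5, 9]))  # [2, 7] -> both candidates remain
print(trie.allowed_next_tokens([4]))     # []     -> off the candidate set
```

A callback of this shape plugs into a decoder that masks disallowed logits at each step; for example, HuggingFace Transformers' `generate` accepts a `prefix_allowed_tokens_fn` hook for exactly this purpose, though whether MIRA uses that particular hook is an assumption.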
Similar Papers
MIRA: Multimodal Iterative Reasoning Agent for Image Editing
CV and Pattern Recognition
Makes computer art follow your exact words.
LLMAID: Identifying AI Capabilities in Android Apps with LLMs
Software Engineering
Finds hidden AI features in phone apps.