To See or To Read: User Behavior Reasoning in Multimodal LLMs

Published: November 5, 2025 | arXiv ID: 2511.03845v1

By: Tianning Dong, Luyi Ma, Varun Vasudevan, and more

Potential Business Impact:

Pictures help computers guess what you'll buy next.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multimodal Large Language Models (MLLMs) are reshaping how modern agentic systems reason over sequential user-behavior data. However, whether textual or image representations of user-behavior data are more effective for maximizing MLLM performance remains underexplored. We present BehaviorLens, a systematic benchmarking framework for assessing modality trade-offs in user-behavior reasoning across six MLLMs by representing transaction data as (1) a text paragraph, (2) a scatter plot, and (3) a flowchart. Using a real-world purchase-sequence dataset, we find that when the data is represented as images, MLLMs' next-purchase prediction accuracy improves by 87.5% compared with an equivalent textual representation, without any additional computational cost.
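To make the modality comparison concrete, here is a minimal Python sketch of how a single purchase sequence might be rendered as a text paragraph versus a scatter-plot image before being sent to an MLLM with the same next-purchase question. The data, helper names, and plot layout are illustrative assumptions, not BehaviorLens's actual code.

```python
# Hypothetical sketch: two of the three representations the paper compares.
# The sequence data and function names are illustrative only.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Toy purchase sequence: (days since first purchase, item name)
sequence = [(0, "running shoes"), (3, "sports socks"), (10, "water bottle")]

def to_text_paragraph(seq):
    """Representation (1): the sequence as a plain-text paragraph."""
    steps = [f"on day {day} the user bought {item}" for day, item in seq]
    return "Purchase history: " + "; ".join(steps) + "."

def to_scatter_plot(seq, path="sequence.png"):
    """Representation (2): the sequence as a scatter plot over time,
    saved as an image file to attach to the MLLM prompt."""
    days = [day for day, _ in seq]
    fig, ax = plt.subplots(figsize=(4, 2))
    ax.scatter(days, range(len(seq)))
    for y, (day, item) in enumerate(seq):
        ax.annotate(item, (day, y), textcoords="offset points", xytext=(5, 0))
    ax.set_xlabel("days since first purchase")
    ax.set_yticks([])  # vertical position only encodes order
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)
    return path

prompt_text = to_text_paragraph(sequence)
image_path = to_scatter_plot(sequence)
# Each artifact would be paired with the same next-purchase question and
# sent to each of the six MLLMs, with accuracy compared across modalities.
```

Under the paper's finding, the image pathway (here, the scatter plot; the flowchart representation would be built analogously) is the one that yields the reported accuracy gain over the text paragraph.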

Page Count
14 pages

Category
Computer Science:
Artificial Intelligence