Generative Interfaces for Language Models
By: Jiaqi Chen, Yanzhe Zhang, Yutong Zhang, and more
Potential Business Impact:
Computers build helpful screens to answer questions.
Large language models (LLMs) are increasingly seen as assistants, copilots, and consultants, capable of supporting a wide range of tasks through natural conversation. However, most systems remain constrained by a linear request-response format that often makes interactions inefficient in multi-turn, information-dense, and exploratory tasks. To address these limitations, we propose Generative Interfaces for Language Models, a paradigm in which LLMs respond to user queries by proactively generating user interfaces (UIs) that enable more adaptive and interactive engagement. Our framework leverages structured interface-specific representations and iterative refinements to translate user queries into task-specific UIs. For systematic evaluation, we introduce a multidimensional assessment framework that compares generative interfaces with traditional chat-based ones across diverse tasks, interaction patterns, and query types, capturing functional, interactive, and emotional aspects of user experience. Results show that generative interfaces consistently outperform conversational ones, with humans preferring them in over 70% of cases. These findings clarify when and why users favor generative interfaces, paving the way for future advancements in human-AI interaction.
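The abstract describes a pipeline that maps a user query to a structured interface representation and then iteratively refines it. The paper does not publish its implementation here, so the following is a minimal sketch of that generate-critique-refine loop under stated assumptions: `UISpec`, `generate_spec`, `critique`, and `refine` are all hypothetical names standing in for the framework's actual components, and the LLM calls are replaced with placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class UISpec:
    """Hypothetical structured, interface-specific representation of a UI."""
    components: list = field(default_factory=list)

def generate_spec(query: str) -> UISpec:
    # Placeholder for an LLM call that translates a user query
    # into an initial task-specific UI spec.
    return UISpec(components=[{"type": "text_input", "label": query}])

def critique(spec: UISpec) -> list:
    # Placeholder evaluator; returns a list of issues to fix.
    # An empty list means the spec passes and refinement stops.
    return [] if spec.components else ["empty interface"]

def refine(spec: UISpec, issues: list) -> UISpec:
    # Placeholder refinement step: patch the spec for each reported issue.
    for issue in issues:
        spec.components.append({"type": "note", "label": issue})
    return spec

def generative_interface(query: str, max_iters: int = 3) -> UISpec:
    """Iteratively refine a UI spec for a query, stopping when no issues remain."""
    spec = generate_spec(query)
    for _ in range(max_iters):
        issues = critique(spec)
        if not issues:
            break
        spec = refine(spec, issues)
    return spec
```

In this sketch the critique step plays the role of the paper's iterative refinement signal; in a real system both `generate_spec` and `critique` would be backed by model calls rather than stubs.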
Similar Papers
Designing Effective LLM-Assisted Interfaces for Curriculum Development
Computers and Society
Makes AI easier for teachers to use.
Experimental Analysis of Productive Interaction Strategy with ChatGPT: User Study on Function and Project-level Code Generation Tasks
Software Engineering
Helps computers write better code, faster.
InterChat: Enhancing Generative Visual Analytics using Multimodal Interactions
Human-Computer Interaction
Lets computers understand your data questions better.