Conversational AI as a Coding Assistant: Understanding Programmers' Interactions with and Expectations from Large Language Models for Coding
By: Mehmet Akhoroz, Caglar Yildirim
Potential Business Impact:
Helps programmers use AI assistants to write code faster and more reliably.
Conversational AI interfaces powered by large language models (LLMs) are increasingly used as coding assistants. However, questions remain about how programmers interact with LLM-based conversational agents, the challenges they encounter, and the factors influencing adoption. This study investigates programmers' usage patterns, perceptions, and interaction strategies when engaging with LLM-driven coding assistants. Through a survey, participants reported both the benefits, such as efficiency and clarity of explanations, and the limitations, including inaccuracies, lack of contextual awareness, and concerns about over-reliance. Notably, some programmers actively avoid LLMs due to a preference for independent learning, distrust in AI-generated code, and ethical considerations. Based on our findings, we propose design guidelines for improving conversational coding assistants, emphasizing context retention, transparency, multimodal support, and adaptability to user preferences. These insights contribute to the broader understanding of how LLM-based conversational agents can be effectively integrated into software development workflows while addressing adoption barriers and enhancing usability.
Similar Papers
User Misconceptions of LLM-Based Conversational Programming Assistants
Human-Computer Interaction
Helps people understand what AI coding assistants can and cannot do.
Developer-LLM Conversations: An Empirical Study of Interactions and Generated Code Quality
Software Engineering
Helps developers get better code from AI by studying their conversations with it.
Conversational AI as a Catalyst for Informal Learning: An Empirical Large-Scale Study on LLM Use in Everyday Learning
Human-Computer Interaction
Helps people learn new things informally in everyday life with AI chatbots.