NC-Bench: An LLM Benchmark for Evaluating Conversational Competence
By: Robert J. Moore, Sungeun An, Farhan Ahmed, and more
The Natural Conversation Benchmark (NC-Bench) introduces a new approach to evaluating the general conversational competence of large language models (LLMs). Unlike prior benchmarks that focus on the content of model behavior, NC-Bench focuses on the form and structure of natural conversation. Grounded in the IBM Natural Conversation Framework (NCF), NC-Bench comprises three distinct sets. The Basic Conversation Competence set evaluates fundamental sequence management practices, such as answering inquiries, repairing responses, and closing conversational pairs. The RAG set applies the same sequence management patterns as the first set but incorporates retrieval-augmented generation (RAG). The Complex Request set extends the evaluation to complex requests involving more intricate sequence management patterns. Each set tests a model's ability to produce contextually appropriate conversational actions in response to characteristic interaction patterns. Initial evaluations across six open-source models and 14 interaction patterns show that models perform well on basic answering tasks, struggle more with repair tasks (especially repeat), show mixed performance on closing sequences, and find complex multi-turn requests the most challenging, with Qwen models excelling on the Basic set and Granite models on the RAG and Complex Request sets. By operationalizing fundamental principles of human conversation, NC-Bench provides a lightweight, extensible, and theory-grounded framework for assessing and improving the conversational abilities of LLMs beyond topical or task-specific benchmarks.
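To make the evaluation setup concrete, the sketch below shows how a benchmark of this shape could be represented and scored: each item pairs a conversational context with an interaction pattern and an expected conversational action, and per-pattern accuracy is reported. The field names, pattern labels, and judging step are illustrative assumptions, not the actual NC-Bench schema or scoring code.

```python
# Minimal sketch of a pattern-based conversational benchmark item and scoring loop.
# All names here (NCBenchItem, set labels, "repair.repeat", judge) are hypothetical.
from dataclasses import dataclass

@dataclass
class NCBenchItem:
    set_name: str         # e.g. "basic", "rag", or "complex_request" (assumed labels)
    pattern: str          # e.g. "repair.repeat" -- a characteristic interaction pattern
    context: list[dict]   # prior turns, e.g. [{"role": "user", "content": "..."}]
    expected_action: str  # the conversational action an appropriate response performs

def evaluate(items, generate, judge):
    """Score a model per interaction pattern.

    generate(context) -> the model's next turn (a callable wrapping the LLM under test)
    judge(response, expected_action) -> bool (hypothetical appropriateness check)
    """
    scores = {}
    for item in items:
        response = generate(item.context)
        ok = judge(response, item.expected_action)
        scores.setdefault(item.pattern, []).append(ok)
    # Per-pattern accuracy, mirroring the per-pattern results described above.
    return {pattern: sum(oks) / len(oks) for pattern, oks in scores.items()}
```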