Factors affecting the in-context learning abilities of LLMs for dialogue state tracking

Published: June 10, 2025 | arXiv ID: 2506.08753v1

By: Pradyoth Hegde, Santosh Kesiraju, Jan Švec, and others

Potential Business Impact:

Helps conversational systems keep track of what users are asking for during a chat.

Business Areas:
Semantic Search, Internet Services

This study explores the application of in-context learning (ICL) to the dialogue state tracking (DST) problem and investigates the factors that influence its effectiveness. We use a sentence-embedding-based k-nearest-neighbour method to retrieve suitable demonstrations for ICL. The selected demonstrations, along with the test samples, are structured within a template as input to the LLM. We then conduct a systematic study to analyse the impact of factors related to demonstration selection and prompt context on DST performance. This work is conducted using the MultiWoZ2.4 dataset and focuses primarily on the OLMo-7B-instruct, Mistral-7B-Instruct-v0.3, and Llama3.2-3B-Instruct models. Our findings provide several useful insights into the in-context learning abilities of LLMs for dialogue state tracking.
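The retrieval step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper uses sentence embeddings, while this sketch substitutes a toy bag-of-words vector and cosine similarity so it runs with no dependencies; the example dialogue pool and the `top_k_demonstrations` helper are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real sentence
    # embedding model (an assumption for illustration only).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_demonstrations(test_turn, pool, k=3):
    # k-nearest-neighbour retrieval: rank the demonstration pool by
    # similarity to the test turn and keep the top k.
    q = embed(test_turn)
    ranked = sorted(pool, key=lambda d: cosine(q, embed(d["turn"])), reverse=True)
    return ranked[:k]

# Hypothetical demonstration pool of (dialogue turn, dialogue state) pairs.
pool = [
    {"turn": "i need a cheap hotel in the north",
     "state": {"hotel-price": "cheap", "hotel-area": "north"}},
    {"turn": "book a table at an italian restaurant",
     "state": {"restaurant-food": "italian"}},
    {"turn": "find me an expensive hotel downtown",
     "state": {"hotel-price": "expensive"}},
]

demos = top_k_demonstrations("looking for a cheap hotel", pool, k=2)

# Assemble retrieved demonstrations plus the test sample into one prompt
# template for the LLM, as the paper describes.
prompt = "\n\n".join(f"Dialogue: {d['turn']}\nState: {d['state']}" for d in demos)
prompt += "\n\nDialogue: looking for a cheap hotel\nState:"
```

In the paper's setting the retrieved demonstrations and the test turn are filled into a single template and passed to the LLM, which then predicts the dialogue state for the test turn.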

Country of Origin
🇨🇿 Czech Republic

Page Count
6 pages

Category
Computer Science:
Computation and Language