Do LLMs Encode Frame Semantics? Evidence from Frame Identification
By: Jayanth Krishna Chundru, Rudrashis Poddar, Jie Cao, and more
Potential Business Impact:
Helps computers understand word meanings the way people do.
We investigate whether large language models encode latent knowledge of frame semantics, focusing on frame identification, a core challenge in frame semantic parsing that involves selecting the appropriate semantic frame for a target word in context. Using the FrameNet lexical resource, we evaluate models under prompt-based inference and observe that they can perform frame identification effectively even without explicit supervision. To assess the impact of task-specific training, we fine-tune the models on FrameNet data, which substantially improves in-domain accuracy while generalizing well to out-of-domain benchmarks. Further analysis shows that the models can generate semantically coherent frame definitions, highlighting their internalized understanding of frame semantics.
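To make the frame-identification task concrete, here is a minimal sketch of how a prompt-based query might be assembled. This is not the paper's code: the frame names and one-line definitions below are hypothetical stand-ins for FrameNet entries, and a real system would retrieve candidate frames and definitions from the FrameNet lexicon for the target lemma.

```python
# Illustrative sketch of prompt-based frame identification.
# The frame inventory below is a hypothetical stand-in for FrameNet entries.

# Candidate frames the target lemma "run" might evoke, with short definitions.
CANDIDATE_FRAMES = {
    "Self_motion": "A living being moves under its own power along a path.",
    "Operating_a_system": "An operator causes a system or device to function.",
    "Leadership": "A leader is in charge of a group or organization.",
}

def build_prompt(sentence: str, target: str, frames: dict) -> str:
    """Format a frame-identification query for an LLM: the model is asked
    to pick, from the listed candidates, the frame evoked by `target`
    as it is used in `sentence`."""
    options = "\n".join(f"- {name}: {definition}"
                        for name, definition in frames.items())
    return (
        f"Sentence: {sentence}\n"
        f"Target word: {target}\n"
        f"Candidate frames:\n{options}\n"
        "Answer with the name of the frame the target word evokes."
    )

# In context, "runs" here means managing an organization, so a model with
# internalized frame semantics should answer "Operating_a_system" or similar.
print(build_prompt("She runs the factory.", "runs", CANDIDATE_FRAMES))
```

The prompt's answer string would then be matched against the candidate frame names; fine-tuning, as described above, trains the model to produce the gold frame name for such inputs.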
Similar Papers
Mechanistic Interpretability of Socio-Political Frames in Language Models
Computation and Language
Helps computers understand how people think about politics.
Evaluating Large Language Models on the Frame and Symbol Grounding Problems: A Zero-shot Benchmark
Artificial Intelligence
Computers now understand tricky thinking problems.
Exploring How LLMs Capture and Represent Domain-Specific Knowledge
Machine Learning (CS)
Helps computers pick the best AI for each job.