Information Extraction from Conversation Transcripts: Neuro-Symbolic vs. LLM
By: Alice Saebom Kwak, Maria Alexeeva, Gus Hahn-Powell, and more
Potential Business Impact:
Helps computers understand farm talk better.
The current trend in information extraction (IE) is to rely extensively on large language models, effectively discarding decades of experience in building symbolic or statistical IE systems. This paper compares a neuro-symbolic (NS) and an LLM-based IE system in the agricultural domain, evaluating them on nine interviews across pork, dairy, and crop subdomains. The LLM-based system outperforms the NS one (F1 total: 69.4 vs. 52.7; core: 63.0 vs. 47.2), where "total" covers all extracted information and "core" covers only the essential details. However, each system has trade-offs: the NS approach offers faster runtime, greater control, and high accuracy on context-free tasks, but it lacks generalizability, struggles with contextual nuances, and requires significant resources to develop and maintain. The LLM-based system achieves higher performance, faster deployment, and easier maintenance, but it has slower runtime, limited control, model dependency, and hallucination risks. Our findings highlight the "hidden cost" of deploying NLP systems in real-world applications, emphasizing the need to balance performance, efficiency, and control.
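For context on the reported scores: extraction quality is conventionally measured with F1, the harmonic mean of precision and recall over the set of extracted items compared against gold annotations. The sketch below is a minimal illustration of that calculation; the triple format and the example extractions are hypothetical and not taken from the paper.

```python
from typing import Set, Tuple

# Hypothetical illustration (not from the paper): score a system's extracted
# (entity, attribute, value) triples against gold-annotated triples with F1.
def extraction_f1(predicted: Set[Tuple[str, str, str]],
                  gold: Set[Tuple[str, str, str]]) -> float:
    """Harmonic mean of precision and recall over sets of extracted items."""
    if not predicted or not gold:
        return 0.0
    true_positives = len(predicted & gold)       # items both extracted and in gold
    precision = true_positives / len(predicted)  # fraction of extractions that are correct
    recall = true_positives / len(gold)          # fraction of gold items that were found
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Made-up example in the spirit of the agricultural interviews:
gold = {("herd", "size", "200"), ("feed", "type", "corn"), ("farm", "product", "pork")}
pred = {("herd", "size", "200"), ("feed", "type", "soy")}
print(round(extraction_f1(pred, gold), 3))  # 0.4 (precision 0.5, recall 0.33)
```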
Similar Papers
LLM-Augmented Symbolic NLU System for More Reliable Continuous Causal Statement Interpretation
Computation and Language
Makes computers understand science facts better.
Advancing Symbolic Integration in Large Language Models: Beyond Conventional Neurosymbolic AI
Artificial Intelligence
Makes smart computer answers easier to understand.
Evolving Paradigms in Task-Based Search and Learning: A Comparative Analysis of Traditional Search Engine with LLM-Enhanced Conversational Search System
Information Retrieval
Helps people learn better using smart computer answers.