SMILE: A Composite Lexical-Semantic Metric for Question-Answering Evaluation
By: Shrikant Kendre, Austin Xu, Honglu Zhou, and more
Potential Business Impact:
Scores AI-generated answers in close agreement with human judgment, at low computational cost.
Traditional evaluation metrics for textual and visual question answering, such as ROUGE, METEOR, and Exact Match (EM), focus heavily on n-gram-based lexical similarity and often miss the deeper semantic understanding needed for accurate assessment. While measures like BERTScore and MoverScore leverage contextual embeddings to address this limitation, they lack flexibility in balancing sentence-level and keyword-level semantics and ignore lexical similarity, which remains important. Large Language Model (LLM) based evaluators, though powerful, come with drawbacks such as high cost, bias, inconsistency, and hallucination. To address these issues, we introduce SMILE: Semantic Metric Integrating Lexical Exactness, a novel approach that combines sentence-level semantic understanding with keyword-level semantic similarity and exact keyword matching. This composite method balances lexical precision and semantic relevance, offering a more comprehensive evaluation. Extensive benchmarks across text, image, and video QA tasks show that SMILE is highly correlated with human judgments and computationally lightweight, bridging the gap between lexical and semantic evaluation.
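The composite idea can be illustrated with a short sketch. The snippet below is a minimal illustration rather than the authors' implementation: it assumes a sentence-transformers embedding model, naive whitespace keyword extraction, and hypothetical equal weights for the three components (sentence-level semantic similarity, keyword-level semantic similarity, and exact keyword overlap); the actual SMILE formulation, keyword extraction, and weighting are defined in the paper.

```python
# Minimal sketch of a composite lexical-semantic score in the spirit of SMILE.
# Assumptions (not from the paper): a sentence-transformers embedding model,
# whitespace-tokenized keywords, and equal weights for the three components.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def composite_score(prediction: str, reference: str,
                    w_sent: float = 1 / 3, w_kw_sem: float = 1 / 3,
                    w_lex: float = 1 / 3) -> float:
    # 1) Sentence-level semantics: cosine similarity of full-answer embeddings.
    sent_sim = util.cos_sim(model.encode(prediction), model.encode(reference)).item()

    # 2) Keyword-level semantics: for each reference keyword, take its best-matching
    #    prediction keyword by cosine similarity, then average over reference keywords.
    pred_kws = prediction.lower().split()
    ref_kws = reference.lower().split()
    if pred_kws and ref_kws:
        kw_sims = util.cos_sim(model.encode(ref_kws), model.encode(pred_kws))
        kw_sem = kw_sims.max(dim=1).values.mean().item()
    else:
        kw_sem = 0.0

    # 3) Lexical exactness: fraction of reference keywords appearing verbatim
    #    in the prediction (a simple stand-in for exact keyword matching).
    lex = sum(kw in pred_kws for kw in ref_kws) / len(ref_kws) if ref_kws else 0.0

    return w_sent * sent_sim + w_kw_sem * kw_sem + w_lex * lex


if __name__ == "__main__":
    print(composite_score("The Eiffel Tower is located in Paris.",
                          "It is in Paris, France."))
```

In this sketch, the weighted sum is what lets the metric trade off lexical precision against semantic relevance; how SMILE actually balances the components is specified in the paper.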
Similar Papers
The illusion of a perfect metric: Why evaluating AI's words is harder than it looks
Computation and Language
Helps AI write better by checking its work.
MORQA: Benchmarking Evaluation Metrics for Medical Open-Ended Question Answering
Computation and Language
Helps computers judge if medical answers are good.
Factual and Musical Evaluation Metrics for Music Language Models
Sound
Tests if music AI answers questions correctly.