How to Detect and Defeat Molecular Mirage: A Metric-Driven Benchmark for Hallucination in LLM-based Molecular Comprehension
By: Hao Li, Liuzhenghao Lv, He Cao, and more
Potential Business Impact:
Fixes AI mistakes in drug design.
Large language models (LLMs) are increasingly used in scientific domains, especially for molecular understanding and analysis. However, existing models suffer from hallucination, which leads to errors in drug design and utilization. In this paper, we first analyze the sources of hallucination in LLMs for molecular comprehension tasks, specifically the knowledge-shortcut phenomenon observed in the PubChem dataset. To evaluate hallucination in molecular comprehension tasks with computational efficiency, we introduce Mol-Hallu, a novel free-form evaluation metric that quantifies the degree of hallucination based on the scientific entailment relationship between generated text and actual molecular properties. Using the Mol-Hallu metric, we reassess and analyze the extent of hallucination in various LLMs performing molecular comprehension tasks. Furthermore, we propose a Hallucination Reduction Post-processing (HRPP) stage to alleviate molecular hallucinations; experiments show that HRPP is effective on both decoder-only and encoder-decoder molecular LLMs. Our findings provide critical insights into mitigating hallucination and improving the reliability of LLMs in scientific applications.
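To make the idea of an entailment-based hallucination metric concrete, here is a minimal Python sketch. It is an illustrative assumption, not the paper's actual Mol-Hallu implementation: the function names, the property vocabulary, and the substring matching are all stand-ins (a real system would use a scientific entailment or NER model). It scores the fraction of property claims in a generated answer that are not supported by the molecule's ground-truth properties.

    from typing import Set

    def extract_property_mentions(text: str, property_vocab: Set[str]) -> Set[str]:
        """Naively find mentions of known molecular properties in generated text.

        Simple substring matching stands in for a proper scientific NER or
        entailment model.
        """
        lowered = text.lower()
        return {prop for prop in property_vocab if prop.lower() in lowered}

    def hallucination_score(generated: str,
                            true_properties: Set[str],
                            property_vocab: Set[str]) -> float:
        """Fraction of mentioned properties NOT entailed by the ground truth.

        0.0 means every checkable claim is supported; 1.0 means every
        mentioned property is unsupported (fully hallucinated).
        """
        mentioned = extract_property_mentions(generated, property_vocab)
        if not mentioned:
            return 0.0  # the answer asserts nothing we can check
        unsupported = mentioned - true_properties
        return len(unsupported) / len(mentioned)

    if __name__ == "__main__":
        vocab = {"soluble in water", "carcinogenic", "flammable"}
        truth = {"soluble in water"}
        answer = "The compound is soluble in water and carcinogenic."
        # One of the two mentioned properties is unsupported -> 0.5
        print(hallucination_score(answer, truth, vocab))

The design choice mirrored here is the one the abstract describes: rather than n-gram overlap, the metric asks whether each generated property claim is entailed by the molecule's actual properties, so fluent but factually wrong text is penalized.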
Similar Papers
Evaluating Evaluation Metrics -- The Mirage of Hallucination Detection
Computation and Language
Makes AI less likely to make up facts.
The Illusion of Progress: Re-evaluating Hallucination Detection in LLMs
Computation and Language
Fixes AI mistakes that humans can't see.
HalluLens: LLM Hallucination Benchmark
Computation and Language
Stops AI from making up fake answers.