A Hybrid Search for Complex Table Question Answering in Securities Report
By: Daiki Shirafuji, Koji Tanaka, Tatsuhiko Saito
Potential Business Impact:
Helps computers understand tables to answer questions.
Large Language Models (LLMs) have recently gained increasing attention in the domain of Table Question Answering (TQA), particularly for extracting information from tables in documents. However, directly feeding entire tables into LLMs as long text often leads to incorrect answers because most LLMs cannot inherently capture complex table structures. In this paper, we propose a cell extraction method for TQA that requires no manual identification of table structure, even for complex table headers. Our approach estimates table headers by computing similarities between a given question and individual cells via a hybrid retrieval mechanism that integrates a language model and TF-IDF. We then select as the answer the cell at the intersection of the most relevant row and column. Furthermore, the language model is trained with contrastive learning on a small dataset of question-header pairs to enhance performance. We evaluated our approach on the TQA dataset from the U4 shared task at NTCIR-18. The experimental results show that our pipeline achieves an accuracy of 74.6%, outperforming existing LLMs such as GPT-4o mini (63.9%). Although we used traditional encoder models for retrieval in this study, in future work we plan to incorporate more efficient text-search models to improve performance and narrow the gap with human evaluation results.
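The core idea, blending dense (language-model) and sparse (TF-IDF) similarity between the question and candidate header cells, then reading off the cell at the intersection of the best row and column, can be illustrated with a minimal sketch. The names below (hybrid_scores, answer_cell, the sentence-transformers encoder, the blending weight alpha) are illustrative assumptions rather than the authors' released code, and the contrastive fine-tuning of the encoder on question-header pairs is omitted.

```python
# Minimal sketch of hybrid (dense + TF-IDF) header retrieval for TQA.
# Assumes scikit-learn and sentence-transformers are installed; the model
# name and blending weight are placeholders, not the paper's exact setup.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer


def hybrid_scores(question, headers, model, alpha=0.5):
    """Blend dense and sparse similarity between the question and each header cell."""
    # Dense similarity from a sentence encoder.
    emb = model.encode([question] + headers)
    dense = cosine_similarity(emb[:1], emb[1:])[0]
    # Sparse similarity from TF-IDF over the same texts.
    tfidf = TfidfVectorizer().fit([question] + headers)
    mat = tfidf.transform([question] + headers)
    sparse = cosine_similarity(mat[:1], mat[1:])[0]
    return alpha * dense + (1 - alpha) * sparse


def answer_cell(question, row_headers, col_headers, table, model):
    """Return the cell at the intersection of the most relevant row and column."""
    best_row = int(np.argmax(hybrid_scores(question, row_headers, model)))
    best_col = int(np.argmax(hybrid_scores(question, col_headers, model)))
    return table[best_row][best_col]


# Toy usage example (values are placeholders, not data from the paper).
model = SentenceTransformer("all-MiniLM-L6-v2")
row_headers = ["Net sales", "Operating income"]
col_headers = ["FY2022", "FY2023"]
table = [["100", "120"], ["10", "15"]]
print(answer_cell("What was net sales in FY2023?", row_headers, col_headers, table, model))
```

In practice, the row and column headers would come from parsing the securities-report table, the encoder would be the contrastively fine-tuned model, and the blending weight would be tuned on held-out question-header pairs.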
Similar Papers
Agentic LLMs for Question Answering over Tabular Data
Computation and Language
Answers questions from complex tables using smart computer language.
Table as a Modality for Large Language Models
Computation and Language
Helps computers understand charts and tables better.
Improving Table Understanding with LLMs and Entity-Oriented Search
Computation and Language
Helps computers understand information in tables better.