Exploring Generative Process Reward Modeling for Semi-Structured Data: A Case Study of Table Question Answering

Published: October 23, 2025 | arXiv ID: 2510.20304v1

By: Lei Tang, Wei Zhou, Mohsen Mesgar

Potential Business Impact:

Helps AI systems answer questions about data in tables more reliably.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Process reward models (PRMs) improve complex reasoning in large language models (LLMs) by grading candidate solutions step by step and selecting answers via aggregated step scores. While effective in domains such as mathematics, their applicability to tasks involving semi-structured data, such as table question answering (TQA), remains unexplored. TQA poses unique challenges for PRMs, including abundant irrelevant information, loosely connected reasoning steps, and domain-specific reasoning. This work presents the first systematic study of PRMs for TQA. We evaluate state-of-the-art generative PRMs on TQA from both the answer and the step perspective. Results show that PRMs combining textual and code verification can aid solution selection but struggle to generalize to out-of-domain data. Analysis reveals a weak correlation between performance in step-level verification and answer accuracy, possibly stemming from weak step dependencies and loose causal links. Our findings highlight limitations of current PRMs on TQA and offer valuable insights for building more robust, process-aware verifiers.
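To make the selection mechanism in the abstract concrete, here is a minimal sketch of PRM-based best-of-N answer selection. The `score_step` callable is a hypothetical stand-in for a generative PRM call and is not from the paper; a real PRM would condition on the table, the question, and the preceding steps. Aggregating step scores by minimum or mean mirrors the "aggregated step scores" described above.

```python
from typing import Callable, List

def select_answer(
    question: str,
    table: str,
    candidates: List[List[str]],  # each candidate solution = a list of reasoning steps
    score_step: Callable[[str, str, List[str], str], float],  # hypothetical PRM call
    aggregate: str = "min",  # "min" or "mean" over step scores
) -> int:
    """Return the index of the candidate with the best aggregated step score."""
    best_idx, best_score = -1, float("-inf")
    for i, steps in enumerate(candidates):
        if not steps:  # skip degenerate candidates with no reasoning steps
            continue
        # Score each step given the question, the table, and the step's prefix.
        scores = [
            score_step(question, table, steps[:t], step)
            for t, step in enumerate(steps)
        ]
        # Aggregate per-step scores into one candidate-level score.
        agg = min(scores) if aggregate == "min" else sum(scores) / len(scores)
        if agg > best_score:
            best_idx, best_score = i, agg
    return best_idx
```

Minimum aggregation rejects a candidate for a single low-scoring step, while mean aggregation is more forgiving; either way, the paper's finding of a weak correlation between step-level verification and final-answer accuracy on TQA suggests the choice of aggregator alone may not close the gap.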

Country of Origin
🇭🇰 Hong Kong

Page Count
14 pages

Category
Computer Science:
Computation and Language