LLM driven Text-to-Table Generation through Sub-Tasks Guidance and Iterative Refinement
By: Rajmohan C, Sarthak Harne, Arvind Agarwal
Potential Business Impact:
Helps computers turn messy text into organized tables.
Transforming unstructured text into structured data is a complex task that requires semantic understanding, reasoning, and structural comprehension. While Large Language Models (LLMs) offer potential, they often struggle to handle ambiguous or domain-specific data, maintain table structure, manage long inputs, and perform numerical reasoning. This paper proposes an efficient system for LLM-driven text-to-table generation built on novel prompting techniques. Specifically, the system combines two key strategies: breaking the text-to-table task down into manageable, guided sub-tasks and refining the generated tables through iterative self-feedback. We show that this custom task decomposition lets the model address the problem stepwise and improves the quality of the generated tables. We also discuss the benefits and potential risks of iterative self-feedback on the generated tables, highlighting the trade-off between enhanced performance and computational cost. Our methods achieve strong results compared to baselines on two complex, publicly available text-to-table generation datasets.
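The two strategies described in the abstract lend themselves to a simple pipeline. Below is a minimal Python sketch of how guided sub-task decomposition and iterative self-feedback could be wired together; `call_llm`, the specific sub-task prompts, and the stopping rule are illustrative assumptions, not the paper's actual prompts or implementation.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API; replace
    with a real LLM client before running."""
    raise NotImplementedError("plug in your LLM client here")


def generate_table(text: str, max_refinements: int = 2) -> str:
    # Sub-task 1: decide what columns the table should have.
    columns = call_llm(
        f"List the column headers a table summarizing this text "
        f"should have, one per line:\n\n{text}"
    )

    # Sub-task 2: extract rows, guided by the headers from step 1.
    rows = call_llm(
        f"Using exactly these columns:\n{columns}\n\n"
        f"Extract the table rows from this text, one row per line, "
        f"values separated by ' | ':\n\n{text}"
    )

    # Sub-task 3: assemble the final table from headers and rows.
    table = call_llm(
        f"Assemble a well-formed table from these headers and rows.\n"
        f"Headers:\n{columns}\nRows:\n{rows}"
    )

    # Iterative self-feedback: the model critiques its own table
    # against the source text, then revises it. Each round adds two
    # LLM calls, which is the performance/compute trade-off the
    # abstract highlights.
    for _ in range(max_refinements):
        feedback = call_llm(
            f"Check this table against the source text. List factual "
            f"or structural errors, or reply 'OK' if there are none.\n\n"
            f"Text:\n{text}\n\nTable:\n{table}"
        )
        if feedback.strip().upper() == "OK":
            break
        table = call_llm(
            f"Revise the table to fix these issues:\n{feedback}\n\n"
            f"Table:\n{table}\n\nSource text:\n{text}"
        )
    return table
```

The refinement loop makes the cost trade-off explicit: each extra round buys a chance to fix structural or numerical errors at the price of additional model calls.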
Similar Papers
An LLM-Based Approach for Insight Generation in Data Analysis
Artificial Intelligence
Finds hidden patterns in data automatically.
Tabular Data Understanding with LLMs: A Survey of Recent Advances and Challenges
Computation and Language
Helps computers understand all kinds of tables.
A Note on Statistically Accurate Tabular Data Generation Using Large Language Models
Machine Learning (CS)
Makes synthetic (computer-generated) data statistically closer to real data.