TABLET: Table Structure Recognition using Encoder-only Transformers
By: Qiyu Hou, Jun Wang
Potential Business Impact:
Helps computers understand messy tables faster.
To address the challenges of table structure recognition, we propose a novel Split-Merge-based top-down model optimized for large, densely populated tables. Our approach formulates row and column splitting as sequence labeling tasks, utilizing dual Transformer encoders to capture feature interactions. The merging process is framed as a grid cell classification task, leveraging an additional Transformer encoder to ensure accurate and coherent merging. By eliminating unstable bounding box predictions, our method reduces resolution loss and computational complexity, achieving high accuracy while maintaining fast processing speed. Extensive experiments on FinTabNet and PubTabNet demonstrate the superiority of our model over existing approaches, particularly in real-world applications. Our method offers a robust, scalable, and efficient solution for large-scale table recognition, making it well-suited for industrial deployment.
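The abstract's pipeline can be sketched in code: row and column splitting as per-position sequence labeling with two Transformer encoders, and merging as classification over grid-cell tokens with a third encoder. This is a minimal PyTorch sketch of that idea only; all module names, the pooling scheme, dimensions, and the number of merge classes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SplitMergeSketch(nn.Module):
    """Hypothetical sketch of a Split-Merge table recognizer:
    sequence labeling for row/column splits, plus grid-cell
    classification for merges (all details assumed)."""

    def __init__(self, d_model=64, nhead=4, num_layers=2, merge_classes=5):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model, nhead, batch_first=True
        )
        # Dual encoders: one over row tokens, one over column tokens.
        self.row_encoder = nn.TransformerEncoder(make_layer(), num_layers)
        self.col_encoder = nn.TransformerEncoder(make_layer(), num_layers)
        # Third encoder over grid-cell tokens for the merge stage.
        self.merge_encoder = nn.TransformerEncoder(make_layer(), num_layers)
        self.row_head = nn.Linear(d_model, 2)   # split / no-split per row position
        self.col_head = nn.Linear(d_model, 2)   # split / no-split per column position
        self.merge_head = nn.Linear(d_model, merge_classes)  # e.g. merge-left/up/none

    def forward(self, feat):
        # feat: (B, H, W, d_model) image features from some backbone (assumed).
        row_seq = feat.mean(dim=2)  # (B, H, d): one token per pixel row
        col_seq = feat.mean(dim=1)  # (B, W, d): one token per pixel column
        row_logits = self.row_head(self.row_encoder(row_seq))  # (B, H, 2)
        col_logits = self.col_head(self.col_encoder(col_seq))  # (B, W, 2)
        # In the real model, thresholded splits would define an R x C grid;
        # here we stand in for cell tokens by summing row and column tokens.
        cells = row_seq.unsqueeze(2) + col_seq.unsqueeze(1)    # (B, H, W, d)
        B, H, W, d = cells.shape
        merge_logits = self.merge_head(
            self.merge_encoder(cells.reshape(B, H * W, d))
        ).reshape(B, H, W, -1)
        return row_logits, col_logits, merge_logits
```

Note how no bounding boxes are predicted anywhere: every output is a per-position or per-cell classification, which is the property the abstract credits for stability and speed.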
Similar Papers
Structural Deep Encoding for Table Question Answering
Computation and Language
Helps computers understand charts and tables better.
TableCenterNet: A one-stage network for table structure recognition
CV and Pattern Recognition
Helps computers understand tables in any document.
Improving Deep Tabular Learning
Machine Learning (CS)
Helps computers learn from messy data better.