FreshTab: Sourcing Fresh Data for Table-to-Text Generation Evaluation
By: Kristýna Onderková, Ondřej Plátek, Zdeněk Kasner and more
Potential Business Impact:
Makes computers understand new information from tables.
Table-to-text generation (insight generation from tables) is a challenging task that requires precision in analyzing the data. In addition, evaluation on existing benchmarks is affected by contamination of Large Language Model (LLM) training data as well as by domain imbalance. We introduce FreshTab, an on-the-fly method for generating table-to-text benchmarks from Wikipedia, to combat the LLM data contamination problem and enable domain-sensitive evaluation. While non-English table-to-text datasets are limited, FreshTab collects datasets in different languages on demand (we experiment with German, Russian and French in addition to English). We find that insights generated by LLMs from recent tables collected by our method score clearly worse on automatic metrics, but this is not reflected in LLM-based and human evaluations. Domain effects are visible in all evaluations, showing that a domain-balanced benchmark is more challenging.
Similar Papers
Evaluating Latent Knowledge of Public Tabular Datasets in Large Language Models
Computation and Language
Computers might be cheating on tests.
TabReX: Tabular Referenceless eXplainable Evaluation
Computation and Language
Checks if computer-made tables are good.
Table as a Modality for Large Language Models
Computation and Language
Helps computers understand charts and tables better.