Abstract: The table-to-text generation task refers to converting tabular data into natural language text to facilitate easier understanding and analysis of the table. Recently, pre-trained models have made significant advances on such tasks. However, the inherent structural differences between tabular data and text, together with the lack of domain-specific knowledge in few-shot datasets, make it challenging for pre-trained models to generate faithful text. To address these problems, we propose a framework that encodes tables by obtaining structural bias attention through pruning full self-attention, distinguishing the importance of cells from a structural perspective. We apply the pre-trained model with this structural bias framework as the generation component of a Prototype-to-Generation pipeline. To encourage the prototype memory to adhere to the table content and to generate more accurate and better-aligned sentences, we employ reinforcement learning. We conduct extensive experiments on three few-shot table datasets. Compared with prior advanced methods, our model achieves superior performance across multiple evaluation metrics.