Abstract: Table annotation is crucial for making web and enterprise tables usable in downstream NLP applications. Unlike textual data, where learning semantically rich token or sentence embeddings often suffices, tables are structured collections of columns, and useful representations must jointly capture each column's semantics and the inter-column relationships. Existing models learn by linearizing the 2D table into a 1D token sequence and encoding it with pretrained language models (PLMs) such as BERT. However, this leads to limited semantic quality and weaker generalization to unseen or rare values compared to modern LLMs, as well as degraded structural modeling due to 2D-to-1D flattening and context-length constraints. We propose TabEmb, which directly targets these limitations by decoupling semantic encoding from structural modeling. An LLM first produces semantically rich embeddings for each column; a graph-based module over the columns then injects inter-column relationships into those embeddings, yielding joint semantic–structural representations for table annotation. Experiments show that TabEmb consistently outperforms strong baselines on a range of table annotation tasks.
Source code and datasets are available at
https://github.com/hoseinzadeehsan/TabEmb
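To make the decoupling concrete, here is a minimal sketch of the two-stage idea the abstract describes: a semantic stage that embeds each column's values with an off-the-shelf embedding model, followed by a structural stage that passes messages over a fully connected graph of columns. All names (`ColumnGraphLayer`, `annotate_columns`, `embed_fn`) are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the decoupled pipeline; not the TabEmb source code.
import torch
import torch.nn as nn


class ColumnGraphLayer(nn.Module):
    """One round of message passing over a fully connected column graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)       # transform neighbor embeddings
        self.upd = nn.Linear(2 * dim, dim)   # combine self state + messages

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_columns, dim) semantic column embeddings
        n = h.size(0)
        # Mean-aggregate messages from all *other* columns.
        adj = (1.0 - torch.eye(n)) / max(n - 1, 1)
        m = adj @ self.msg(h)
        # Update each column embedding with its aggregated neighborhood.
        return torch.relu(self.upd(torch.cat([h, m], dim=-1)))


def annotate_columns(column_texts, embed_fn, graph_layer, classifier):
    """Stage 1: embed_fn (any LLM-based sentence embedder, assumed to return
    a (dim,) tensor per text) encodes each serialized column independently.
    Stage 2: the graph layer injects inter-column relationships.
    Finally, a per-column classifier produces annotation labels."""
    h = torch.stack([embed_fn(t) for t in column_texts])  # (C, dim) semantics
    h = graph_layer(h)                                    # inject structure
    return classifier(h)                                  # per-column logits
```

The point of the sketch is the separation of concerns: the embedding model never sees the whole table as one flattened sequence, so context-length limits apply per column rather than per table, while the graph layer alone is responsible for inter-column structure.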