Leveraging LLMs to Improve Human Annotation Efficiency with INCEpTION

Luís Filipe Cunha, Nana Yu, Purificação Silvano, Ricardo Campos, Alípio Jorge

Published: 01 Jan 2025 · Last Modified: 06 Jan 2026 · Crossref · CC BY-SA 4.0
Abstract: Manual text annotation is a complex and time-consuming task. However, recent work shows that it can be accelerated with automated pre-annotation. In this paper, we present a methodology for improving the efficiency of manual text annotation by leveraging LLMs for text pre-annotation. For this purpose, we train a BERT model on a token classification task and integrate it into the INCEpTION annotation tool to generate span-level suggestions for human annotators. To assess the usefulness of our approach, we conducted an experiment in which an experienced linguist annotated plain text both with and without our model’s pre-annotations. Our results show that the model-assisted approach reduces annotation time by nearly 23%.
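The abstract describes turning token-level classifier output into span-level suggestions for the annotation tool. As an illustration only (the paper does not publish its decoding code, and the function and label names here are hypothetical), a minimal sketch of merging BIO-style token labels into spans might look like this:

```python
def bio_to_spans(tokens, labels):
    """Merge BIO token labels into (start_token, end_token, type) spans.

    `end_token` is exclusive. Labels are assumed to be "O", "B-<TYPE>",
    or "I-<TYPE>"; an "I-" that does not continue an open span of the
    same type is treated as "O" (a common, conservative decoding choice).
    """
    spans = []
    start, etype = None, None
    for i, lab in enumerate(labels):
        if lab.startswith("B-"):
            if start is not None:          # close any open span first
                spans.append((start, i, etype))
            start, etype = i, lab[2:]
        elif lab.startswith("I-") and start is not None and lab[2:] == etype:
            continue                       # span continues
        else:
            if start is not None:          # "O" or mismatched "I-" ends the span
                spans.append((start, i, etype))
                start, etype = None, None
    if start is not None:                  # flush a span that runs to the end
        spans.append((start, len(labels), etype))
    return spans

# Hypothetical model output for a four-token sentence:
tokens = ["The", "eruption", "destroyed", "Pompeii"]
labels = ["O", "B-EVENT", "O", "B-LOC"]
print(bio_to_spans(tokens, labels))  # → [(1, 2, 'EVENT'), (3, 4, 'LOC')]
```

Spans produced this way could then be surfaced to annotators, e.g. through INCEpTION's recommender interface, as accept/reject suggestions rather than fixed labels.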