Rethinking NLU Text Classification for Long Documents and Resource-Constrained Models

ACL ARR 2025 July Submission273 Authors

26 Jul 2025 (modified: 19 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Encoder models have excelled at Natural Language Understanding (NLU) classification tasks for shorter texts. As recent datasets for NLU tasks such as Sentiment Analysis increasingly involve longer texts, the traditional 512-token context window of encoder models poses a challenge. To address this, we present sentence-level text selection methods, including heuristics and learned models, that enable context-limited encoders to process longer documents effectively while maintaining computational efficiency. In parallel, we seek to optimize sub-10B-parameter decoder models for NLU classification tasks in resource-constrained settings. We propose applying the pairwise comparison training method to such tasks, adapting the Bradley-Terry model, which significantly enhances model performance. We evaluate our approaches primarily on the Norwegian Entity-Level Sentiment Analysis (ELSA) dataset, whose texts have a mean length of 650 tokens, and on the Norwegian and English EuroEval benchmarks. Results show that text selection halves training time and improves encoder performance on the longer-document ELSA task. Furthermore, pairwise comparison training enables gemma-2-9b to achieve a weighted F1 of 83.3% on ELSA and establishes new performance benchmarks for sub-10B models on the EuroEval NLU classification datasets for sentiment analysis and linguistic acceptability.
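Note on the pairwise comparison training mentioned above: under the Bradley-Terry model, the probability that item i is preferred over item j with scalar scores s_i and s_j is sigmoid(s_i - s_j), and training minimizes the negative log-likelihood of the observed preferences. The following is a minimal illustrative sketch only, not the authors' implementation: the scalar-scoring setup, function name, and example values are assumptions, and the paper's exact adaptation to NLU classification is not reproduced here.

    import torch
    import torch.nn.functional as F

    def bradley_terry_loss(score_preferred: torch.Tensor,
                           score_other: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry: P(preferred beats other) = sigmoid(s_pref - s_other);
        # the loss is the negative log-likelihood of the observed preference.
        return -F.logsigmoid(score_preferred - score_other).mean()

    # Illustrative usage: scores would come from a model with a scalar head
    # (an assumption; the paper's exact scoring setup is not shown here).
    s_pos = torch.tensor([1.2, 0.3])   # scores for the preferred items
    s_neg = torch.tensor([0.1, -0.4])  # scores for the other items
    loss = bradley_terry_loss(s_pos, s_neg)

Training on score differences in this way is one common route for adapting pairwise objectives to classification, since the same scalar scorer can rank competing label hypotheses; the paper's specific adaptation may differ.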
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: argument mining, stance detection, style analysis
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: Norwegian, English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Section 8
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Appendix D
B2 Discuss The License For Artifacts: No
B2 Elaboration: Commonly used, widespread artifacts
B3 Artifact Use Consistent With Intended Use: N/A
B3 Elaboration: Existing artifacts are commonly used and publicly available. The intended use of our artifacts is described in Section 8.
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B4 Elaboration: No new data collected. Openly available datasets. Extensive inspection of datasets did not reveal any new information not available to the public elsewhere.
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Datasets used: Section 3 with their cited papers for all datasets used.
B6 Statistics For Data: Yes
B6 Elaboration: Section 3, Table 1
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: parameters: Section 5; GPU memory budget: Section 1; training time examples: Figure 1 and Table 2 caption.
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 5.4, Appendix D
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 5, Tables 2 and 3, Figure 4
C4 Parameters For Packages: No
C4 Elaboration: Standard packages. We provide the URL of the anonymized repository where further training specifics will be posted.
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: Yes
E1 Elaboration: Section 8
Author Submission Checklist: Yes
Submission Number: 273