Beyond Accuracy Optimization: Computer Vision Losses for Large Language Model Fine-Tuning

ACL ARR 2024 June Submission 711 Authors

12 Jun 2024 (modified: 27 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have demonstrated impressive performance across various tasks. However, current training approaches combine standard cross-entropy loss with extensive data, human feedback, or ad hoc methods to enhance performance. These solutions are often not scalable or feasible due to their associated costs, complexity, or resource requirements. This study investigates the use of established semantic segmentation loss functions in natural language generation to create a versatile, practical, and scalable solution for fine-tuning different architectures. We evaluate their effectiveness on Math Word Problems and question answering across models of varying sizes. For the analyzed tasks, we find that the traditional cross-entropy loss represents a sub-optimal choice, while models trained to minimize alternative (task-dependent) losses, such as Focal or Lovász, achieve a mean improvement of +42% in exact match without requiring additional data or human feedback. These findings suggest a promising pathway for more efficient and accessible training processes.
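As a concrete illustration of the kind of substitution the abstract describes, the sketch below swaps token-level cross-entropy for a focal loss during causal LM fine-tuning. This is a minimal, hypothetical example rather than the authors' implementation; the function name focal_lm_loss, the default gamma = 2.0, and the ignore_index = -100 padding convention are assumptions.

```python
# Minimal sketch (not the paper's code): focal loss as a drop-in
# replacement for cross-entropy in causal LM fine-tuning.
import torch
import torch.nn.functional as F

def focal_lm_loss(logits: torch.Tensor,
                  labels: torch.Tensor,
                  gamma: float = 2.0,        # assumed default focusing parameter
                  ignore_index: int = -100   # assumed padding label convention
                  ) -> torch.Tensor:
    """Focal loss over next-token predictions.

    logits: (batch, seq_len, vocab_size) model outputs
    labels: (batch, seq_len) target token ids, `ignore_index` marks padding
    """
    # Shift so tokens < t predict token t (standard causal LM setup).
    logits = logits[:, :-1, :].contiguous()
    labels = labels[:, 1:].contiguous()

    # Per-token cross-entropy; ignored positions come back as 0.
    ce = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        reduction="none",
        ignore_index=ignore_index,
    )

    # p_t = exp(-CE) is the probability assigned to the correct token;
    # the (1 - p_t)^gamma factor down-weights easy, well-predicted tokens.
    pt = torch.exp(-ce)
    focal = (1.0 - pt) ** gamma * ce

    # Average only over real (non-padding) tokens.
    valid = (labels.view(-1) != ignore_index).float()
    return (focal * valid).sum() / valid.sum().clamp(min=1.0)
```

In a typical fine-tuning loop this function would simply replace the model's default language-modeling loss, taking the same shifted logits and labels; the focusing parameter gamma controls how strongly well-predicted tokens are down-weighted.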
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: Efficient/Low-Resource Methods for NLP, Generation, Language Modeling, Machine Learning for NLP, NLP Applications, Question Answering, Efficiency in Model Algorithms, Training and Inference
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 711