Cluster Purge Loss: Structuring Transformer Embeddings for Equivalent Mutants Detection

ACL ARR 2025 February Submission 5126 Authors

16 Feb 2025 (modified: 09 May 2025), ACL ARR 2025 February Submission, CC BY 4.0
Abstract: Recent pre-trained transformer models achieve superior performance on various code processing tasks. However, common approaches to fine-tuning them for downstream classification, such as distance-based methods or training an additional classification head, are effective at optimizing decision boundaries but often fail to structure the embedding space thoroughly enough to reflect nuanced intra-class semantic relationships. Equivalent code mutant detection is one such task, where the quality of the embedding space is crucial to model performance. We introduce a novel framework that integrates cross-entropy loss with a deep metric learning objective, termed Cluster Purge Loss. Unlike conventional approaches, this objective concentrates on fine-grained differences within each class, encouraging the separation of instances from the class center according to semantic equivalence, using dynamically adjusted boundaries. Employing UniXCoder as the base model, our approach achieves state-of-the-art performance in equivalent mutant detection and produces a more interpretable embedding space.
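The exact formulation of Cluster Purge Loss is given in the paper body; the sketch below is only a rough illustration of the general pattern the abstract describes, namely cross-entropy on a classification head combined with a centroid-based deep metric learning term that uses a dynamically scaled per-class margin. All identifiers here (cluster_purge_term, combined_loss, margin_scale, lambda_metric) are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: a cross-entropy objective combined with a
# centroid-based metric-learning term over transformer embeddings.
# This is NOT the published Cluster Purge Loss, just the general pattern.
import torch
import torch.nn.functional as F


def cluster_purge_term(embeddings, labels, margin_scale=1.0):
    """Toy centroid-based term: pull instances toward their class centroid
    once they drift past a dynamically scaled per-class margin.
    (Placeholder logic standing in for the paper's objective.)"""
    loss = embeddings.new_zeros(())
    for c in labels.unique():
        members = embeddings[labels == c]
        if members.size(0) < 2:
            continue
        centroid = members.mean(dim=0, keepdim=True)
        dists = torch.cdist(members, centroid).squeeze(-1)   # distances to the class center
        margin = margin_scale * dists.mean().detach()        # dynamic per-class boundary
        loss = loss + F.relu(dists - margin).mean()          # penalize instances beyond it
    return loss


def combined_loss(logits, embeddings, labels, lambda_metric=0.1):
    """Cross-entropy on the classification head plus the metric-learning term."""
    return F.cross_entropy(logits, labels) + lambda_metric * cluster_purge_term(embeddings, labels)


if __name__ == "__main__":
    # Dummy batch standing in for UniXCoder [CLS] embeddings and head logits.
    emb = torch.randn(8, 768, requires_grad=True)
    logits = torch.randn(8, 2, requires_grad=True)
    labels = torch.randint(0, 2, (8,))
    combined_loss(logits, emb, labels).backward()
```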
Paper Type: Short
Research Area: Machine Learning for NLP
Research Area Keywords: representation learning, contrastive learning, transfer learning / domain adaptation
Contribution Types: NLP engineering experiment
Languages Studied: Java code
Submission Number: 5126