Fine-tuning BERT for Intelligent Software System Fault Classification

Published: 01 Jan 2024, Last Modified: 09 Feb 2025 · QRS Companion 2024 · CC BY-SA 4.0
Abstract: As software systems increasingly evolve towards intelligence, new failure modes continue to emerge. Faced with rapidly accumulating, complex software failures from multiple sources, quickly determining their fault categories is a crucial step in improving the efficiency of fault resolution, optimizing resource allocation, and continuously improving software quality. The semantic understanding afforded by neural language models paves the way for swift, automated categorization. However, directly applying NLP models to software fault classification tends to require supplementary data from the development process and suffers from limited classifier generalization. To address the challenge of generating universal representations for software fault diagnosis, we introduce a classification framework that leverages a pre-trained BERT model for nuanced global feature extraction. The model harnesses deep learning techniques, including feedforward and transformer encoder layers, to capture and utilize the rich semantic nuances and domain-specific knowledge crucial for fault categorization. Validation on an independent dataset demonstrates that our BERT-Transformer outperforms competing models, achieving the highest precision and F1 score at 80% and 78.7%, showcasing its superior accuracy and robust generalization in fault classification tasks.
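The pipeline the abstract describes (pre-trained BERT features, followed by a transformer encoder layer and a feedforward classification head) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the class name `FaultClassifier`, the layer sizes, and the number of fault classes are assumptions, and a random tensor stands in for actual BERT token embeddings so the sketch is self-contained.

```python
import torch
import torch.nn as nn

class FaultClassifier(nn.Module):
    """Illustrative sketch: BERT features -> transformer encoder -> feedforward head."""

    def __init__(self, hidden=768, n_heads=8, n_classes=5):
        super().__init__()
        # A transformer encoder layer over the (assumed) BERT token embeddings;
        # in practice the features would come from a pre-trained model such as
        # transformers.BertModel.from_pretrained("bert-base-uncased").
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=1)
        # Feedforward classification head mapping to fault categories.
        self.head = nn.Sequential(
            nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, bert_features):
        # bert_features: (batch, seq_len, hidden) token-level embeddings
        x = self.encoder(bert_features)
        # Classify from the first ([CLS]-position) token representation.
        return self.head(x[:, 0])

model = FaultClassifier()
logits = model(torch.randn(2, 16, 768))  # batch of 2 fault reports, 16 tokens each
print(logits.shape)  # torch.Size([2, 5])
```

In an actual fine-tuning setup, the BERT backbone and this head would be trained jointly with a cross-entropy loss over labeled fault reports; the stand-in random input here only demonstrates the tensor shapes flowing through the architecture.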
