LLMAD-mini: Efficient Distilling Hierarchical Chain-of-Thought for Interpretable Log Anomaly Reasoning and Detection using Large Language Model

17 Sept 2025 (modified: 30 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Chain-of-Thought, Knowledge Distillation, Large Language Model, Anomaly Detection
Abstract: Log anomaly detection is critical for system reliability, yet most existing methods focus only on binary detection without providing explanations or identifying root causes, which limits their usefulness in production environments. To address these challenges, we propose LLMAD-mini, a lightweight LLM-based model that combines knowledge distillation with Low-Rank Adaptation (LoRA) fine-tuning to deliver strong reasoning and comprehensive log understanding. Large language models (LLMs) that produce human-interpretable descriptions can be adapted to specialized logs via supervised fine-tuning, but the high cost of training and deployment remains a major barrier. To adapt LLMs efficiently on small in-domain datasets, we introduce a hierarchical Chain-of-Thought mechanism that significantly enhances reasoning capability with limited data. Evaluated on several system log datasets, LLMAD-mini surpasses traditional anomaly detection methods in detection accuracy and provides far better reasoning than much larger LLMs. Notably, it achieves a 3.2× improvement in reasoning quality compared to an LLM with 30× more parameters. Furthermore, our experiments on out-of-domain logs demonstrate LLMAD-mini's ability to generalize across diverse systems, improving anomaly detection accuracy by 40% and raising BLEU-4 from 0.01 to 0.49 when diagnosing failures, making it a practical and efficient solution for real-world deployment.
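The LoRA fine-tuning the abstract refers to is standard low-rank adaptation: the frozen pretrained weight matrix is augmented with a trainable low-rank update, so only a small fraction of parameters is trained. A minimal NumPy sketch of the mechanism (illustrative shapes and hyperparameters, not the authors' implementation):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """Linear layer with a LoRA adapter.

    W (d_out x d_in) is the frozen pretrained weight. The low-rank
    update B @ A, with A (r x d_in) and B (d_out x r), holds the only
    trainable parameters; r << min(d_in, d_out), so adaptation is cheap.
    `alpha` is the usual LoRA scaling hyperparameter.
    """
    r = A.shape[0]
    delta = (alpha / r) * (B @ A)   # low-rank weight update
    return x @ (W + delta).T

# Illustrative sizes: d_in=8, d_out=4, rank r=2
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                  # frozen pretrained weight
A = rng.normal(scale=0.01, size=(2, 8))      # trainable down-projection
B = np.zeros((4, 2))                         # trainable up-projection, zero-init
x = rng.normal(size=(1, 8))

# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Here the adapter trains r·(d_in + d_out) = 24 parameters instead of the full 32 of W; at LLM scale the same ratio makes fine-tuning on a small in-domain log dataset tractable, which is the efficiency argument the abstract relies on.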
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 9823