DQA: Diagnostic Question Answering for IT Support

Published: 18 Apr 2026, Last Modified: 30 Apr 2026 · ACL 2026 Industry Track Poster · CC BY 4.0
Keywords: diagnostic question answering, retrieval-augmented generation, diagnostic state, evidence aggregation, multi-turn dialogue, root cause analysis, enterprise IT support, conversational retrieval
TL;DR: DQA introduces explicit diagnostic state and root-cause–level evidence aggregation for multi-turn RAG, improving enterprise IT troubleshooting success from 41.3% to 78.7% (90.6% relative gain) while reducing interaction turns by 53.6%.
Abstract: Enterprise IT support interactions are fundamentally diagnostic: effective resolution requires iterative evidence gathering from ambiguous user reports to identify an underlying root cause. While retrieval-augmented generation (RAG) provides grounding through historical cases, standard multi-turn RAG systems lack explicit diagnostic state and therefore struggle to accumulate evidence and resolve competing hypotheses across turns. We introduce DQA, a diagnostic question-answering framework that maintains persistent diagnostic state and aggregates retrieved cases at the level of root causes rather than individual documents. DQA combines conversational query rewriting, retrieval aggregation, and state-conditioned response generation to support systematic troubleshooting under enterprise latency and context constraints. We evaluate DQA on 150 anonymized enterprise IT support scenarios using a replay-based protocol. Averaged over three independent runs, DQA achieves a 78.7% success rate under a trajectory-level success criterion, compared to 41.3% for a multi-turn RAG baseline, while reducing average turns from 8.4 to 3.9. This improvement reflects the benefit of explicitly representing competing explanations and aggregating evidence across turns in unscripted troubleshooting.
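The core idea of root-cause-level aggregation can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `DiagnosticState`, the case fields `root_cause` and `score`, and the simple additive scoring are all assumptions made for exposition. The point it shows is that retrieval scores are summed per root cause and carried across turns, so hypotheses compete at the root-cause level rather than document by document.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class DiagnosticState:
    """Hypothetical persistent per-conversation state: accumulated
    evidence scores keyed by root cause (illustrative only)."""
    scores: dict = field(default_factory=lambda: defaultdict(float))

    def update(self, retrieved_cases):
        """Aggregate this turn's retrieval scores by root cause,
        not by individual retrieved document."""
        for case in retrieved_cases:
            self.scores[case["root_cause"]] += case["score"]

    def top_hypotheses(self, k=2):
        """Return the k best-supported competing explanations."""
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:k]

# Two troubleshooting turns; evidence for the same root cause accumulates.
state = DiagnosticState()
state.update([
    {"root_cause": "vpn_cert_expired", "score": 0.9},
    {"root_cause": "dns_misconfig", "score": 0.4},
])
state.update([{"root_cause": "vpn_cert_expired", "score": 0.7}])
print(state.top_hypotheses())
```

Under this sketch, a hypothesis weakly supported by many cases can overtake one strongly supported by a single case, which is the aggregation behavior the abstract contrasts with per-document RAG ranking.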
Submission Type: Emerging
Copyright Form: pdf
Submission Number: 256