SaLSA-RAG: State-and-Law Summary Aligned Retrieval-Augmented Generation for Conversational Legal Advice
Keywords: Conversational legal advice, Retrieval-augmented generation, Conversational retrieval, Reranking
Abstract: Conversational legal advice must generate grounded answers under evolving multi-turn context, where the key challenge is to retrieve statutes that are legally applicable rather than merely topically similar. Standard retrieval-augmented generation typically relies on a single query view, which can surface lexically plausible yet inapplicable evidence. We propose SaLSA-RAG, a State-and-Law Summary Aligned framework for multi-turn legal consultation. At each turn, SaLSA-RAG builds (i) a history-aware retrieval query from the current utterance and user-only dialogue history, and (ii) a concise legal analysis state that captures parties, salient facts, procedural posture, and the sub-issue to resolve. A dense retriever first fetches candidate statutes, and SaLSA-Reranker then aligns the query and induced state with applicability-oriented statute summaries to score and select evidence for generation. On the Chinese LexRAG benchmark, SaLSA-RAG improves downstream answer quality, raising micro keyword recall from 0.286 to 0.370 and the overall LLM-judge score from 5.13 to 5.63. It also improves retrieval quality: with the default dense encoder, test nDCG@10 increases from 0.1143 to 0.1798, and reaches 0.2109 with a stronger embedding model.
Paper Type: Long
Research Area: Retrieval-Augmented Language Models
Research Area Keywords: retrieval-augmented generation, conversational QA, legal NLP
Contribution Types: NLP engineering experiment
Languages Studied: Chinese
Submission Number: 4315