AGR: Reinforced Causal Agent-Guided Self-explaining Rationalization

Published: 12 Aug 2024, Last Modified: 30 Sept 2024. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024). License: CC BY 4.0
Abstract: Most existing rationalization approaches are susceptible to degeneration accumulation because they lack effective control over the model's learning direction during training. To address this issue, we propose AGR (Agent-Guided Rationalization), a novel approach that guides the model's next action based on its current training state. Specifically, we introduce causal intervention calculus to quantify the causal effects inherent in rationale training, and employ a reinforcement learning process to refine their learning bias. Furthermore, we pretrain an agent within this reinforced causal environment to guide the next step of the model. We theoretically demonstrate that a good model needs the desired guidance, and empirically show the effectiveness of our approach, which outperforms existing state-of-the-art methods on the BeerAdvocate and HotelReview datasets.
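For readers unfamiliar with self-explaining rationalization, the sketch below illustrates the standard selector-predictor training loop that approaches like AGR build on, with a rule-based stand-in for the guiding agent. This is not the authors' implementation: the module names (Selector, Predictor), the choose_action policy, and all hyperparameters are illustrative assumptions; in AGR the agent is pretrained with reinforcement learning in a causal environment rather than hand-coded.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Selector(nn.Module):
    """Scores each token and samples a binary rationale mask."""
    def __init__(self, emb_dim, hidden):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)
        probs = torch.sigmoid(self.score(h).squeeze(-1))  # (batch, seq)
        hard = torch.bernoulli(probs.detach())            # non-differentiable sample
        # Straight-through estimator: forward uses the hard mask,
        # backward flows through the soft probabilities.
        return hard + probs - probs.detach()

class Predictor(nn.Module):
    """Classifies the input from the selected (masked) tokens only."""
    def __init__(self, emb_dim, hidden, n_classes):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x, mask):
        h, _ = self.rnn(x * mask.unsqueeze(-1))
        return self.out(h[:, -1])

def choose_action(curr_loss, prev_loss):
    """Hypothetical stand-in for the pretrained agent: when the loss
    stalls, update the selector to explore new rationales; otherwise
    keep refining the predictor. AGR learns such a policy with RL in a
    reinforced causal environment instead of hard-coding it."""
    return "selector" if curr_loss >= prev_loss else "predictor"

# Toy usage: random vectors stand in for token embeddings.
torch.manual_seed(0)
sel, pred = Selector(16, 32), Predictor(16, 32, 2)
opts = {"selector": torch.optim.Adam(sel.parameters(), lr=1e-3),
        "predictor": torch.optim.Adam(pred.parameters(), lr=1e-3)}
x, y = torch.randn(8, 20, 16), torch.randint(0, 2, (8,))

prev_loss = float("inf")
for step in range(5):
    mask = sel(x)
    loss = F.cross_entropy(pred(x, mask), y) + 0.01 * mask.mean()  # sparsity penalty
    action = choose_action(loss.item(), prev_loss)  # agent picks the next action
    for o in opts.values():
        o.zero_grad()
    loss.backward()
    opts[action].step()
    prev_loss = loss.item()
    print(f"step {step}: loss={loss.item():.4f}, updated {action}")
```

In this toy setup, degeneration corresponds to the selector and predictor co-adapting to uninformative masks; gating which component is updated at each step, as the stand-in agent does here, is one simple way to exert the kind of control over the learning direction that the abstract describes.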