Abstract: Humans solve common problems together by discussing, explaining, and agreeing or disagreeing with one another.
Similarly, a system that can discuss tasks with human partners has the potential to improve its performance and reliability.
Previous research on explainability has allowed systems only to make predictions and humans only to ask questions about them, without a mutual exchange of opinions.
This research aims to create a dataset and computational framework for systems that discuss and refine their predictions through dialogue.
Through experiments, we show that the proposed system can hold beneficial discussions with humans, improving accuracy by up to 25 points on a natural language inference task.
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.