- Keywords: Explainable Sentiment Analysis, Transformers, Extractive Summarization
- TL;DR: We propose two Transformer-based approaches for sentiment analysis with extractive summaries as decision explanation.
- Abstract: In recent years, the paradigm of eXplainable Artificial Intelligence (XAI) has gained wide interest within the research community and beyond. The Natural Language Processing (NLP) community is also embracing this new way of understanding AI applications: building models that provide an explanation for their decisions without sacrificing performance. This is certainly not an easy task, given the widespread use of poorly interpretable models such as Transformers, which have become almost ubiquitous in the NLP literature owing to the great strides they have enabled. Here we propose two different methodologies that exploit the performance of these models on a sentiment analysis task while, at the same time, generating a summary that serves as an explanation of the decision taken by the system. To compare the classification performance of the two methodologies, we used the IMDB dataset; to assess their explainability, we annotated a subset of this dataset with human extractive summaries and benchmarked these against the summaries generated by the systems.