Editorial: Explainable AI in Natural Language Processing

Published: 01 Jan 2024 · Last Modified: 20 May 2025 · Frontiers Artif. Intell. 2024 · CC BY-SA 4.0
Abstract: The study by Quan et al. is an attempt to develop trustworthy sentiment analysis systems by combining attention-based analysis with the integration of external knowledge. In their approach, the authors propose training the model with a multi-task learning objective, augmented with an attention mechanism that assigns scores to evidence (a schematic sketch of this setup is given after the abstract). This multi-task approach reduces bias and provides a more comprehensive understanding of the underlying sentiment. Furthermore, an external knowledge base is employed to retrieve complete evidence phrases, thus enhancing the rationality and faithfulness of the model's predictions (evidence extraction).

Usually, in multimodal settings, such as those involving the integration of vision and language (e.g.

studied constructions in linguistics. The authors propose a way of testing the PLMs' recognition of the CC that overcomes the challenge of probing for linguistic phenomena that do not lend themselves to minimal pairs. In their experiments, they employ BERT, RoBERTa, and DeBERTa as pretrained language models and probe their understanding of the CC in zero-shot settings. This study observed that PLMs are able to recognize the structure of the CC but fail to use its meaning in diverse NLP tasks.

Nowadays, deep learning-based NLP applications also influence the lives of non-experts. This is the case, for example, of a banking system with an NLP model that automatically denies a loan application. Therefore,

The survey developed by Herrewijnen et al. is also in line with the human-friendliness of explanations for non-technical users. This work aims to provide insight into the lessons learned from collecting and using annotator rationales in NLP. To this end, the authors surveyed the use of annotator rationales in the field of
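The multi-task setup summarized above for Quan et al. can be made concrete with a minimal, hypothetical sketch: a sentiment-classification head and a token-level evidence head share an encoder, and an attention layer assigns evidence scores that are also used to pool the sentence representation. All module names, dimensions, and the loss weighting below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not Quan et al.'s code): multi-task sentiment analysis with
# an attention mechanism that scores tokens as evidence for the prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSentimentModel(nn.Module):
    def __init__(self, vocab_size=30522, hidden_dim=256, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.GRU(hidden_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)                 # evidence score per token
        self.sentiment_head = nn.Linear(2 * hidden_dim, num_labels)  # task 1: sentiment label
        self.evidence_head = nn.Linear(2 * hidden_dim, 2)        # task 2: evidence / non-evidence tag

    def forward(self, input_ids):
        h, _ = self.encoder(self.embed(input_ids))               # (batch, seq, 2*hidden)
        attn_scores = torch.softmax(self.attn(h).squeeze(-1), dim=-1)  # attention over tokens
        pooled = torch.bmm(attn_scores.unsqueeze(1), h).squeeze(1)     # attention-weighted summary
        return self.sentiment_head(pooled), self.evidence_head(h), attn_scores

# Joint training objective: sentiment classification plus token-level evidence tagging,
# combined with an (assumed) weighting factor of 0.5 on the auxiliary evidence loss.
model = MultiTaskSentimentModel()
input_ids = torch.randint(0, 30522, (4, 32))                     # dummy batch of token ids
sentiment_logits, evidence_logits, attn = model(input_ids)
sentiment_loss = F.cross_entropy(sentiment_logits, torch.randint(0, 3, (4,)))
evidence_loss = F.cross_entropy(evidence_logits.view(-1, 2), torch.randint(0, 2, (4 * 32,)))
loss = sentiment_loss + 0.5 * evidence_loss
```

In such a sketch, the attention weights double as token-level evidence scores that can be surfaced to users, while the auxiliary evidence-tagging task encourages the attention to align with human-annotated rationales.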