A hybrid fusion-based machine learning framework to improve sentiment prediction of Assamese in a low-resource setting
Abstract: Most sentiment analysis work to date has focused on unimodal data, with comparatively little effort devoted to multimodal data. With the growth of multimedia, multimodal sentiment analysis is emerging as a forefront research area in natural language processing, since combining modalities yields a clearer understanding of sentiment. However, multimodal sentiment analysis in a low-resource setting is yet to be explored for several resource-poor languages such as Assamese. In this paper, we propose a hybrid fusion-based multimodal sentiment analysis framework for the Assamese news domain. We exploit lexical features and specific image objects to develop two separate semantic and visual models that predict sentiment independently. Next, we combine the image and text features through feature-level fusion to build a multimodal model for joint sentiment classification. Finally, a decision-level (late) fusion scheme is applied to the three models, i.e., the textual, visual, and multimodal systems, for the final sentiment prediction. The hybrid fusion multimodal framework improves sentiment prediction performance over both single-modality and feature-level multimodal systems in the Assamese news domain.
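The two fusion stages the abstract describes can be illustrated in miniature: feature-level (early) fusion concatenates the modality feature vectors before classification, while decision-level (late) fusion combines the predictions of the textual, visual, and multimodal models. The sketch below is an assumption-laden toy, not the paper's actual pipeline: the function names, the use of class-probability averaging for late fusion, and all numbers are illustrative only.

```python
# Minimal sketch of hybrid fusion: early (feature-level) fusion plus
# late (decision-level) fusion over three classifiers' outputs.
# All names and the probability-averaging rule are assumptions for
# illustration; the paper's actual models and fusion rule may differ.

def feature_fusion(text_feats, image_feats):
    """Feature-level fusion: concatenate text and image feature vectors."""
    return text_feats + image_feats

def decision_fusion(prob_text, prob_image, prob_multimodal):
    """Decision-level fusion: average per-class probabilities
    from the textual, visual, and multimodal classifiers."""
    return {
        c: (prob_text[c] + prob_image[c] + prob_multimodal[c]) / 3
        for c in prob_text
    }

# Toy per-class probabilities from three hypothetical classifiers
p_text = {"positive": 0.6, "negative": 0.4}
p_image = {"positive": 0.3, "negative": 0.7}
p_multi = {"positive": 0.5, "negative": 0.5}

fused = decision_fusion(p_text, p_image, p_multi)
prediction = max(fused, key=fused.get)  # class with highest averaged probability
```

Here the multimodal classifier would itself be trained on the output of `feature_fusion`, so the late-fusion stage sees both the unimodal views and their joint representation.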