Abstract: Over the years, much research on sentiment analysis of data from Twitter (one of the world's largest microblogging platforms) has been single-modal, i.e. either text or image mining. Unfortunately, most researchers have not considered non-trivial elements such as memes (i.e. a combination of image and text data) and GIFs (i.e. a combination of video and audio data), which dominate the Twitter world today. Addressing these limitations of existing systems, we propose a novel framework called "Multimodal Twitter Sentiment Analysis using Feature Learning", which determines the polarity of tweets by considering all types of data: text, images, and GIFs. The framework consists of three main modules: a Data Collection module that gathers real-time tweets using the Twitter streaming API; a Data Processing module that applies a context-aware hybrid algorithm for text sentiment analysis and Fast R-CNN for image sentiment analysis, with GIFs handled by an optical character recognizer that separates text from images before polarity is determined; and a Multimodal Sentiment Scoring module that aggregates the polarity scores of images and text. Evaluation results show that the proposed framework achieves an accuracy of 96.7%, outperforming single-modal sentiment analysis models based on SVM and Naïve Bayes.
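The final multimodal scoring step mentioned in the abstract can be illustrated with a minimal sketch. The function names, weights, and thresholds below are illustrative assumptions, not the framework's actual implementation; they only show one plausible way per-modality polarity scores could be aggregated into a single tweet-level label.

```python
# Hypothetical sketch of multimodal sentiment scoring: polarity scores from
# the text and image classifiers (each assumed to lie in [-1, 1]) are
# combined into one tweet-level score. Weights and thresholds are assumed.

def aggregate_polarity(text_score, image_score, w_text=0.5, w_image=0.5):
    """Combine per-modality polarity scores into a single score."""
    if text_score is None:       # image-only tweet
        return image_score
    if image_score is None:      # text-only tweet
        return text_score
    return w_text * text_score + w_image * image_score

def label(score, threshold=0.05):
    """Map a continuous polarity score to a discrete sentiment label."""
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

# Example: mildly positive text combined with a strongly positive image.
combined = aggregate_polarity(0.4, 0.8)
print(round(combined, 2), label(combined))  # prints "0.6 positive"
```

A weighted average is only one design choice; the weights could also be learned, or the aggregation replaced by a classifier over the per-modality scores.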