Multimodal Representation Learning With Text and Images

Published: 01 Jan 2022, Last Modified: 06 Mar 2024. CoRR 2022.
Abstract: In recent years, multimodal AI has seen an upward trend as researchers integrate data of different types, such as text, images, and speech, into their models to improve results. This project leverages multimodal AI and matrix factorization techniques to learn representations from text and image data simultaneously, drawing on widely used techniques from Natural Language Processing (NLP) and Computer Vision. The learnt representations are evaluated on downstream classification and regression tasks. Because the methodology uses auto-encoders for unsupervised representation learning, it can be extended beyond the scope of this project.
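The abstract describes learning a joint representation from text and image features with an auto-encoder, then reusing that representation for downstream tasks. The paper's actual architecture is not given here; the following is a minimal NumPy sketch of the general idea, with illustrative dimensions, random stand-in features, and a single-hidden-layer auto-encoder trained by gradient descent on reconstruction error (all names and hyperparameters are assumptions, not the authors' method).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper).
n_samples, text_dim, image_dim, latent_dim = 64, 50, 30, 16

# Stand-ins for extracted features: e.g. TF-IDF vectors for text,
# flattened pixel or CNN features for images.
text_feats = rng.normal(size=(n_samples, text_dim))
image_feats = rng.normal(size=(n_samples, image_dim))
X = np.hstack([text_feats, image_feats])  # joint multimodal input

in_dim = text_dim + image_dim
W_enc = rng.normal(scale=0.1, size=(in_dim, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, in_dim))
lr = 0.01

def forward(X):
    Z = np.tanh(X @ W_enc)  # shared latent representation of both modalities
    X_hat = Z @ W_dec       # linear reconstruction of the concatenated input
    return Z, X_hat

losses = []
for _ in range(200):
    Z, X_hat = forward(X)
    err = X_hat - X                             # reconstruction error
    losses.append(float(np.mean(err ** 2)))     # track MSE over training
    grad_dec = Z.T @ err / n_samples            # gradient w.r.t. decoder weights
    grad_hidden = (err @ W_dec.T) * (1 - Z**2)  # backprop through tanh
    grad_enc = X.T @ grad_hidden / n_samples    # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# After training, Z can serve as input features for downstream
# classification or regression, as the abstract describes.
Z, _ = forward(X)
print(Z.shape)
```

Because training is unsupervised (only reconstruction of the input is required), the same pipeline extends to any paired feature sets, which is the extensibility the abstract claims.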