Vision and Language Integration Meets Multimedia Fusion: Proceedings of ACM Multimedia 2016 Workshop

Abstract: Multimodal information fusion, at both the signal and the semantic level, is a core component of most multimedia applications, including multimedia indexing, retrieval, and summarization. Early and late fusion of modality-specific processing results has been addressed in multimedia prototypes since their earliest days, through methodologies including rule-based approaches, information-theoretic models, and machine learning. Vision and language are two of the predominant modalities being fused, and they have attracted special attention in international challenges with long histories of results, such as TRECVID and ImageCLEF. During the last decade, vision-language semantic integration has also drawn interest from traditionally non-interdisciplinary research communities, such as computer vision and natural language processing, because one modality can greatly assist the processing of another by providing cues for disambiguation, complementary information, and noise/error filtering. The recent boom in deep learning methods has opened up new directions in the joint modelling of visual and co-occurring verbal information in multimedia discourse. The workshop on Vision and Language Integration Meets Multimedia Fusion was held during the shared workshop weekend of the ACM Multimedia 2016 Conference and the European Conference on Computer Vision (ECCV 2016), on October 16, 2016, in Amsterdam, the Netherlands. The proceedings contain seven selected long papers, which were presented orally at the workshop, and three abstracts of the invited keynote speeches. The papers and abstracts discuss data collection, representation learning, deep learning approaches, matrix and tensor factorization methods, and graph-based clustering with regard to the fusion of multimedia data. A variety of applications is presented, including image captioning, news summarization, video hyperlinking, sub-shot segmentation of user-generated video, cross-modal classification, cross-modal question answering, and the detection of misleading metadata in user-generated video. The workshop was organized and supported by the EU COST Action iV&L Net, the European Network on Integrating Vision and Language: Combining Computer Vision and Language Processing for Advanced Search, Retrieval, Annotation and Description of Visual Data (IC1307, 2014-2018).