Vision and Feature Norms: Improving automatic feature norm learning through cross-modal maps

HLT-NAACL 2016 (modified: 16 Jul 2019)
Abstract: Property norms have the potential to aid a wide range of semantic tasks, provided that they can be obtained for large numbers of concepts. Recent work has focused on text as the main source of information for automatic property extraction. In this paper we examine property norm prediction from visual, rather than textual, data, using cross-modal maps learnt between property norm and visual spaces. We also investigate the importance of having a complete feature norm dataset, for both training and testing. Finally, we evaluate how these datasets and cross-modal maps can be used in an image retrieval task.
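The cross-modal maps described in the abstract can be illustrated with a minimal sketch. The following is a hypothetical example, not the paper's exact setup: it learns a linear map from a visual feature space to a property-norm space via closed-form ridge regression, a standard choice for cross-modal mapping. All dimensions, the penalty value, and the random data are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's implementation):
# learn a linear cross-modal map W that projects visual vectors into a
# property-norm space, using ridge regression in closed form.

rng = np.random.default_rng(0)
n_concepts, d_visual, d_norms = 50, 20, 10  # hypothetical sizes

X = rng.normal(size=(n_concepts, d_visual))  # visual vectors, one per concept
Y = rng.normal(size=(n_concepts, d_norms))   # property-norm vectors

lam = 1.0  # ridge penalty (assumed value)
# Closed-form ridge solution: W = (X^T X + lam * I)^{-1} X^T Y
W = np.linalg.solve(X.T @ X + lam * np.eye(d_visual), X.T @ Y)

Y_hat = X @ W  # predicted property-norm vectors for each concept
print(Y_hat.shape)  # (50, 10)
```

At test time, the same map can project the visual vector of an unseen concept into the property-norm space, where nearest-neighbour lookup against known norm vectors supports tasks like the image retrieval evaluation mentioned in the abstract.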