Modeling and Learning Semantic Co-Compositionality through Prototype Projections and Neural Networks
Abstract: We present a novel vector space model for semantic co-compositionality. Inspired by Generative Lexicon Theory (Pustejovsky, 1995), our goal is a compositional model in which predicate and argument are each allowed to modify the other's meaning representation while the overall semantics is generated. This readily addresses some major challenges facing current vector space models, notably polysemy and the standard assumption of one representation per word type. We implement co-compositionality using prototype projections on predicates/arguments and show that this is effective in adapting their word representations. We further cast the model as a neural network and propose an unsupervised algorithm to jointly train word representations with co-compositionality. The model achieves the best result to date (ρ = 0.47) on the semantic similarity task of transitive verbs (Grefenstette and Sadrzadeh, 2011).
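To make the projection idea concrete, here is a minimal sketch; this is an assumption-laden illustration, not the paper's exact formulation. It assumes each word carries a small set of prototype vectors (e.g., centroids of its typical co-occurring partners), that the partner word's representation is adapted by orthogonal projection onto the subspace those prototypes span, and that the adapted vectors are then composed. The function names (`prototype_projection`, `co_compose`) and the additive composition step are hypothetical.

```python
import numpy as np

def prototype_projection(vec, prototypes):
    """Project `vec` onto the subspace spanned by `prototypes`.

    Illustrative assumption: `prototypes` is a (k, d) array whose rows
    are prototype vectors. Stacking them as columns of B, the matrix
    B @ pinv(B) is the orthogonal projector onto span(B).
    """
    B = np.asarray(prototypes).T      # (d, k): prototypes as columns
    P = B @ np.linalg.pinv(B)         # (d, d) orthogonal projector onto span(B)
    return P @ vec

def co_compose(verb_vec, obj_vec, verb_protos, obj_protos):
    """Co-compositional composition (sketch): each word is adapted by the
    other word's prototypes before composing; addition stands in for
    whatever composition function the full model uses."""
    verb_adapted = prototype_projection(verb_vec, obj_protos)  # object modifies verb
    obj_adapted = prototype_projection(obj_vec, verb_protos)   # verb modifies object
    return verb_adapted + obj_adapted

# Toy usage: random 50-dimensional vectors, 3 prototypes per word.
rng = np.random.default_rng(0)
d, k = 50, 3
composed = co_compose(rng.normal(size=d), rng.normal(size=d),
                      rng.normal(size=(k, d)), rng.normal(size=(k, d)))
```

Using the pseudoinverse in B @ pinv(B) yields the orthogonal projector onto the prototype subspace even when the prototype vectors are not linearly independent.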