On the Scaling of Polynomial Features for Representation Matching
Feb 12, 2018 (modified: Feb 12, 2018) · ICLR 2018 Workshop Submission
Abstract: In many neural models, representations are augmented with new features computed as polynomial functions of existing ones. Using natural language inference as an example task, we investigate scaled polynomial features of degree 2 and above as matching features. We find that scaling the degree-2 features has the largest impact on performance, reducing classification error by 5% in the best models.
TL;DR: Appropriately scaled polynomial matching features improve classification accuracy in natural language inference.
Keywords: natural language inference, polynomial features, matching features, LSTM
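As a rough illustration of the idea in the abstract, the sketch below builds a matching-feature vector from two sentence representations, combining the usual degree-1 features (the vectors and their absolute difference) with a degree-2 feature (the elementwise product) multiplied by a scaling constant. The function name, the particular feature set, and the `scale` value are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def matching_features(u: np.ndarray, v: np.ndarray, scale: float = 0.1) -> np.ndarray:
    """Concatenate degree-1 and scaled degree-2 matching features.

    u, v : sentence representations of equal dimension.
    scale: hypothetical scaling constant applied to the degree-2
           (elementwise product) feature; the paper studies how
           this kind of scaling affects performance.
    """
    diff = np.abs(u - v)        # degree-1 matching feature
    prod = scale * (u * v)      # scaled degree-2 matching feature
    return np.concatenate([u, v, diff, prod])

# Example: two 300-dimensional sentence encodings yield a
# 4 * 300 = 1200-dimensional matching vector.
u = np.random.randn(300)
v = np.random.randn(300)
feats = matching_features(u, v)
print(feats.shape)  # (1200,)
```

The concatenated vector would then be fed to a classifier (e.g., an MLP over LSTM sentence encodings) to predict the inference label.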