Linguistic Feature Representation with Statistical Relational Learning for Readability Assessment

07 Jul 2023 · OpenReview Archive Direct Upload
Abstract: Traditional NLP models for readability assessment represent a document as a vector of words or a vector of linguistic features, which may be sparse, discrete, and ignore the latent relations among features. We observe from data and from linguistic theory that a document's linguistic features are not necessarily conditionally independent. To capture the latent relations among linguistic features, we propose to build feature graphs and learn distributed representations with Statistical Relational Learning. We then project the document vectors onto the linguistic feature embedding space to produce a linguistic-feature-knowledge-enriched document representation. We showcase this idea with Chinese L1 readability classification experiments and achieve positive results. Our proposed model performs better than traditional vector space models and other embedding-based models on the current data set and deserves further exploration.
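
As a rough illustration of the projection step described in the abstract (not the authors' actual implementation), the sketch below assumes the linguistic feature embeddings have already been learned from a feature graph with a statistical relational learning method; all names, dimensions, and values are hypothetical.

import numpy as np

# Hypothetical dimensions: F linguistic features, each with a d-dimensional
# embedding assumed to have been learned from a feature graph.
F, d = 50, 16
rng = np.random.default_rng(0)

# feature_embeddings[i] stands in for the learned vector of linguistic feature i.
feature_embeddings = rng.normal(size=(F, d))

# doc_features[i] is the (sparse, discrete) value of linguistic feature i for
# one document, e.g. average sentence length; random placeholders here.
doc_features = rng.random(F)

# Projection onto the feature embedding space: a weighted sum of feature
# embeddings yields a dense, knowledge-enriched document representation.
doc_representation = doc_features @ feature_embeddings  # shape (d,)

print(doc_representation.shape)  # (16,)

The resulting dense vector can then be fed to a readability classifier in place of, or alongside, the raw feature vector.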