Keywords: Automated Essay Scoring, Decision-level Ordinal Modeling, Gate Fusion, Multimodal Large Language Models
Abstract: Automated essay scoring (AES) predicts multiple rubric-defined trait scores for each essay, where each trait follows an ordered discrete rating scale.
Most LLM-based AES methods cast scoring as autoregressive token generation and obtain the final score via decoding and parsing, making the decision implicit.
This limitation is particularly acute in multimodal AES, where the usefulness of visual inputs varies across essays and traits. To address it, we propose Decision-Level Ordinal Modeling (DLOM), which makes scoring an explicit ordinal decision: the LM head is reused to extract score-wise logits over predefined score tokens, enabling direct optimization and analysis in the score space.
For multimodal AES, DLOM-GF introduces a gated fusion module that adaptively combines textual and multimodal score logits.
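A minimal sketch of such a gated fusion follows. The paper does not specify the gate's exact parameterization; the sigmoid gate over pooled features, the names `gate_w` and `features`, and the convex combination of the two logit vectors are all illustrative assumptions.

```python
import numpy as np

def gated_fusion(text_logits: np.ndarray,
                 mm_logits: np.ndarray,
                 gate_w: np.ndarray,
                 features: np.ndarray) -> np.ndarray:
    """Adaptively combine textual and multimodal score logits.

    A scalar gate g in (0, 1) is produced from pooled features via a
    linear layer followed by a sigmoid (an assumed design, not the
    paper's exact one), then used to convexly mix the two logit vectors.
    """
    g = 1.0 / (1.0 + np.exp(-float(gate_w @ features)))  # sigmoid gate
    return g * mm_logits + (1.0 - g) * text_logits

# Example: with a zero gate weight the gate is 0.5, so the fused
# logits are the simple average of the two branches.
fused = gated_fusion(np.array([1.0, 3.0]), np.array([3.0, 1.0]),
                     np.zeros(2), np.ones(2))
```
When visual inputs are irrelevant for a given essay or trait, a learned gate of this form can push `g` toward 0 and fall back on the textual branch, which is the behavior the heterogeneous-relevance experiments probe.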
For text-only AES, DLOM-DA adds a distance-aware regularization term to better reflect ordinal distances.
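One plausible form of such a distance-aware term penalizes probability mass in proportion to its ordinal distance from the gold score, added to the usual cross-entropy. This is a sketch under assumptions: the abstract does not give the regularizer's formula, and the absolute-distance penalty and weight `lam` here are illustrative.

```python
import math
import numpy as np

def distance_aware_loss(logits: np.ndarray, target: int, lam: float = 0.5) -> float:
    """Cross-entropy plus an expected ordinal distance penalty.

    The penalty is E_{k~p}[|k - target|], so predictions far from the
    gold score on the rating scale are punished more than near misses,
    reflecting ordinal distances (an assumed instantiation, not the
    paper's exact regularizer).
    """
    p = np.exp(logits - logits.max())
    p /= p.sum()
    ce = -math.log(p[target])
    scores = np.arange(len(logits))
    dist = float(np.sum(p * np.abs(scores - target)))
    return ce + lam * dist
```
Plain cross-entropy treats all wrong scores as equally bad; a term like this makes predicting 3 when the gold score is 4 cheaper than predicting 1, which is the ordinal structure DLOM-DA aims to capture.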
Experiments on the multimodal EssayJudge dataset show that DLOM improves over a generation-based SFT baseline across scoring traits, and DLOM-GF yields further gains when modality relevance is heterogeneous.
On the text-only ASAP/ASAP++ benchmarks, DLOM remains effective without visual inputs, and DLOM-DA further improves performance and outperforms strong representative baselines.
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: Automated Essay Scoring, Decision-Level Ordinal Modeling, Gate Fusion, Multimodal Large Language Models
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 10524