Abstract: The challenge of the cross-prompt automatic essay scoring (AES) task is to perform well on essays from new, unseen prompts when the scoring model has been trained only on essays from seen prompts. Existing cross-prompt AES methods focus on learning prompt-invariant essay representations shared between seen and unseen prompts in order to grade unseen-prompt essays. In this way, the prompt distribution the model learns is largely unconscious, which often results in negative distributional shifts due to the lack of information from target prompts. To steer the model's distribution shift toward the target prompt, we propose optimizing the model's training process to enable conscious prompt generalization. Specifically, we propose a novel meta-learning framework under the prompt-generalization setting. In our method, a meta-learner selection mechanism is proposed that directly optimizes the task-scheduling strategy based on the state of the meta-learner. By introducing information from target prompts during the proposed optimization process, and leveraging this information to select the meta-learning states most conducive to optimizing the model toward the target direction, we guide the model to generalize toward the target prompt distribution. In addition, to enhance the diversity of meta-learning training tasks and further improve the model's generalization ability, we design a data augmentation strategy based on large language models. We conducted experiments on the ASAP dataset, and the results show that the proposed approach achieves a leading average result compared with other cross-prompt AES methods.
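The task-scheduling idea in the abstract can be illustrated with a minimal, hypothetical sketch: candidate meta-training tasks (source prompts) are scored by how much a one-step update on each would move the model toward a target-prompt objective, and the most conducive task is selected for the next meta-update. All function names, the scalar model state, and the toy loss below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch (under stated assumptions): prompts and model state are scalars,
# and "loss" is the distance between the model state and a prompt's ideal state.

def probe_loss(model_state, prompt):
    """Toy proxy for loss on a target-prompt probe set."""
    return abs(model_state - prompt)

def one_step_update(model_state, prompt, lr=0.5):
    """Toy inner-loop update pulling the state toward the task prompt."""
    return model_state + lr * (prompt - model_state)

def select_task(model_state, candidate_prompts, target_prompt):
    """Meta-learner selection mechanism (sketch): pick the source task
    whose one-step update moves the model closest to the target prompt."""
    return min(
        candidate_prompts,
        key=lambda p: probe_loss(one_step_update(model_state, p), target_prompt),
    )

# Toy run: schedule tasks so the state drifts toward the target prompt.
state, target = 0.0, 10.0
sources = [2.0, 5.0, 8.0]
for _ in range(5):
    task = select_task(state, sources, target)
    state = one_step_update(state, task)
print(round(state, 3))  # → 7.75
```

The point of the sketch is only the control flow: target-prompt information enters the scheduler, not the scorer's training data, which is how the framework consciously biases generalization toward the target distribution.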