A Process-Level Method for Creativity Evaluation in LLM-Assisted Learning

ICLR 2026 Conference Submission21916 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLM, Creativity assessment, Process-level evaluation
Abstract: Interpretable creativity assessment remains challenging, and the adoption of large language models (LLMs) in education amplifies issues of subjectivity and opacity. This study presents a process-level evaluation approach for LLM-assisted learning that attributes learner-versus-model contributions from multi-turn student–LLM dialogues and scores four expert-elicited dimensions with rationale texts. Using 1,273 cleaned dialogues from 81 undergraduates across multi domains, an auditable attribution protocol and an instruction-tuned evaluator are introduced to produce process-linked, interpretable rationales. Empirical evaluation with expert assessments indicates alignment with expert judgments. Claims are explicitly scoped to the studied tasks and domains, and code and evaluation scripts will be released for reproducibility.
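The abstract's pipeline (attribute turns to learner vs. model, then score the learner's contributions on four dimensions with rationales) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimension names, the `Turn` structure, and the toy scorer are all hypothetical stand-ins, since the abstract does not name the four expert-elicited dimensions or describe the evaluator's interface.

```python
from dataclasses import dataclass

# Hypothetical dimension names; the paper's four expert-elicited
# dimensions are not specified in the abstract.
DIMENSIONS = ["fluency", "flexibility", "originality", "elaboration"]

@dataclass
class Turn:
    speaker: str  # "student" or "llm" (assumed labels)
    text: str

def attribute_contributions(dialogue):
    """Split a multi-turn dialogue into learner vs. model contributions."""
    return {
        "student": [t.text for t in dialogue if t.speaker == "student"],
        "llm": [t.text for t in dialogue if t.speaker == "llm"],
    }

def score_dialogue(dialogue, scorer):
    """Score each dimension on the student's contributions only,
    returning a (score, rationale) pair per dimension."""
    parts = attribute_contributions(dialogue)
    return {dim: scorer(dim, parts["student"]) for dim in DIMENSIONS}

def toy_scorer(dim, texts):
    """Placeholder standing in for the instruction-tuned evaluator."""
    score = min(5, len(texts))  # trivial heuristic, not the paper's method
    rationale = f"{dim}: based on {len(texts)} student turns"
    return score, rationale
```

In this sketch the evaluator is injected as a callable, so the toy heuristic could be swapped for an actual model-backed scorer without changing the attribution step.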
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 21916