Developing Explainable AI Systems to Support Feedback for Students

Published: 12 Jul 2024, Last Modified: 01 Oct 2024. 17th International Conference on Educational Data Mining. License: CC BY 4.0
Abstract: We present a research plan for developing explainable AI systems powered by large language models (LLMs) to provide safe, reliable, and effective feedback for students. The research integrates several early-stage exploratory analyses: understanding differences between human-written and AI-generated feedback, building a taxonomy of effective feedback types, examining qualities of peer discourse that support engagement, and investigating student interactions with LLMs. The overarching goal is to create systems that validate LLM-generated content and improve how generative AI delivers feedback in educational contexts. The methodology comprises a series of interconnected studies drawing on natural language processing, machine learning, and qualitative research techniques. By addressing current limitations and concerns surrounding AI in education, this work aims to contribute to the responsible integration of AI-powered tools that genuinely support learning and complement human instruction, ultimately promoting educational equity and student success.