AutoTA: A Dynamic Intent-Based Virtual Teaching Assistant for Students Using Open Source LLMs

Published: 2025 · Last Modified: 13 Feb 2026 · IEEE Access 2025 · CC BY-SA 4.0
Abstract: Large Language Models (LLMs) are being explored for their potential to transform education by serving as virtual teaching assistants, offering personalized support through human-like responses to tasks such as content-related questions and coursework guidance. In this study, we present a novel framework that leverages intent classification to enhance the effectiveness of LLMs in this role. Our framework, AutoTA, categorizes student queries into distinct topics (lecture discussions, homework assistance, and syllabus questions), each triggering a conversation chain tailored to that intent. Additionally, we incorporate a custom vector-space filter that refines responses based on filename tracking after intent identification. To evaluate the framework, we used course materials from an undergraduate-level computer science course, Computer Incident Response, and compared the performance of several open-source LLMs, including Llama 3.1. Our results, measured through quantitative and qualitative metrics, show that the framework accurately classifies intent and provides appropriate guidance. These findings highlight the potential of the proposed framework to enhance personalized learning and improve student engagement. While tested in a computer science course, the framework incorporates diverse assessment types, suggesting potential for broader application.
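To make the routing idea concrete, the sketch below illustrates one way an intent classifier could dispatch a student query to an intent-specific chain and apply a filename-based filter over retrieved course-material chunks. This is a minimal illustration assuming a generic LLM callable and retriever; the class names, filename prefixes, and prompts are hypothetical and are not taken from the AutoTA implementation described in the paper.

```python
# Hypothetical sketch of intent-based routing with a filename filter.
# Names and file-prefix conventions are illustrative, not the authors' code.
from dataclasses import dataclass
from typing import Callable

INTENTS = ("lecture_discussion", "homework_assistance", "syllabus_question")

@dataclass
class RetrievedChunk:
    filename: str   # course-material file the chunk was extracted from
    text: str       # chunk content stored in the vector space

def classify_intent(query: str, llm: Callable[[str], str]) -> str:
    """Ask the LLM to label the query with one of the supported intents."""
    prompt = (
        "Classify the student query into one of: "
        + ", ".join(INTENTS)
        + f"\nQuery: {query}\nIntent:"
    )
    label = llm(prompt).strip().lower()
    return label if label in INTENTS else "lecture_discussion"  # fallback

def filename_filter(chunks: list[RetrievedChunk],
                    prefixes: tuple[str, ...]) -> list[RetrievedChunk]:
    """Keep only chunks whose source filename matches the intent's materials
    (hypothetical naming scheme, e.g. 'hw*' files for homework help)."""
    return [c for c in chunks if c.filename.lower().startswith(prefixes)]

def answer(query: str,
           llm: Callable[[str], str],
           retrieve: Callable[[str], list[RetrievedChunk]]) -> str:
    """Route the query through intent classification, filtered retrieval,
    and an intent-tailored prompt."""
    intent = classify_intent(query, llm)
    prefixes = {
        "lecture_discussion": ("lecture",),
        "homework_assistance": ("hw", "assignment"),
        "syllabus_question": ("syllabus",),
    }[intent]
    chunks = filename_filter(retrieve(query), prefixes)
    context = "\n\n".join(c.text for c in chunks)
    return llm(
        f"Intent: {intent}\nContext:\n{context}\n\n"
        f"Student question: {query}\nAnswer:"
    )
```

In this sketch, the filename filter narrows the retrieved context to files plausibly relevant to the detected intent before the response is generated, which is the role the abstract attributes to the vector-space filter with filename tracking.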