From Dialogue to Mastery: Investigating Question-Asking and Interactive Learning with Large Language Models

ACL ARR 2024 August Submission342 Authors

16 Aug 2024 (modified: 05 Sept 2024) · ACL ARR 2024 August Submission · CC BY 4.0
Abstract: This paper investigates the potential of large language models (LLMs) to shift from passive data absorption to active, interactive learning through simulated student-teacher dialogues. We introduce a dataset of 1,322 contexts spanning domains like song lyrics, news articles, movie plots, academic papers, and images, and analyze conversational interactions to assess the ability of LLMs to gain knowledge about these contexts. Our findings show that interactive learning significantly boosts performance, with interactive student models surpassing static learning approaches in just four dialogue turns on average. However, student models still trail behind teacher models equipped with full context knowledge. To further assess learning dynamics, we introduce the Cumulative Information Coverage (CIC) metric, revealing that more insightful questions drive better outcomes, although rigid questioning patterns remain a limitation. These findings suggest that advancing interactive learning methods and extending machine learning theories could better capture the dynamics of conversational learning, paving the way for effective machine intelligence and educational technologies.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: interactive learning, large language models, conversational machine learning
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 342
