VaiBot: Shuttle between the Instructions and Parameters of Large Language Models

ACL ARR 2025 February Submission199 Authors

04 Feb 2025 (modified: 09 May 2025)
License: CC BY 4.0
Abstract: The interaction with LLMs through instructions has been extensively investigated in the research community. However, previous studies have treated the emergence of instructions and the training of LLMs on task data as separate processes, overlooking the inherent unity between the two. This paper proposes a novel neural network framework, VaiBot, that integrates a variational autoencoder (VAE) and a variational information bottleneck (VIB), designed to uniformly model, learn, and infer both the instruction \emph{deduction} and instruction \emph{induction} tasks of LLMs. Through experiments, we demonstrate that VaiBot performs on par with existing baseline methods in deductive capability while significantly surpassing them in inductive capability. We also find that VaiBot can scale up using general instruction-following data and exhibits excellent one-shot induction ability. Finally, we synergistically integrate the deduction and induction processes of VaiBot for the task of \emph{inductive reasoning}. Through t-SNE dimensionality reduction, we observe that this inductive-deductive process significantly improves the distribution of the learned task parameters, enabling VaiBot to outperform baseline methods on inductive reasoning tasks. The code and data for this paper can be found at https://anonymous.4open.science/r/VaiBot-021F.
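The abstract names two variational components, a VAE and a VIB, whose training objectives share the same two ingredients: the reparameterization trick for sampling a latent variable and a closed-form KL regularizer toward a standard normal prior. The sketch below illustrates only those two generic ingredients; VaiBot's actual encoder, decoder, and combined loss are defined in the paper, so the function names and shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps: the reparameterization trick used to sample a
    # latent variable differentiably, common to both VAE and VIB training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior;
    # this regularizer appears in both the VAE ELBO and the VIB objective.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# A posterior that already matches the prior incurs zero KL penalty.
mu, log_var = np.zeros(4), np.zeros(4)
z = reparameterize(mu, log_var)
print(kl_to_standard_normal(mu, log_var))  # prints 0.0
```

In a VAE the KL term regularizes a reconstruction loss, while in a VIB it bounds the information the latent carries about the input; a framework combining the two can reuse this one latent-sampling machinery for both directions.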
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: knowledge inducing, representation learning
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources, Data analysis, Theory
Languages Studied: English
Submission Number: 199