Abstract: Text embedding is essential for language understanding tasks. Large language models (LLMs) have recently been adopted for text embedding due to their ability to capture rich semantic knowledge. Leveraging text-based adventure games as a test bed, we explore how the choice of embedding language model affects Reinforcement Learning (RL) behavior. The results show that, contrary to common assumptions, larger embedding models do not guarantee better performance than smaller ones. Instead, the optimal model size depends on the specific game environment.
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: Language Model, Reinforcement Learning
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1306