LLM-Driven Composite Neural Architecture Search for Multi-Source RL State Encoding

Published: 23 Sept 2025, Last Modified: 22 Nov 2025
License: CC BY 4.0
Keywords: Large Language Models (LLMs), Reinforcement Learning (RL), State Encoding, Neural Architecture Search (NAS), Representation Learning
TL;DR: This work proposes an LLM-driven pipeline for composite neural architecture search in RL state encoding, leveraging language priors to exploit side information and achieve higher sample efficiency on RL tasks such as mixed-autonomy traffic control.
Abstract: Designing state encoders for reinforcement learning (RL) with multiple information sources—such as sensor measurements and time-series signals—remains underexplored and is still largely done by hand. We formalize this challenge as composite neural architecture search (NAS), in which multiple source-specific modules and a fusion module are jointly optimized. Existing NAS methods overlook useful side information about each module's representation quality, limiting their sample efficiency in this multi-source RL setting. To address this, we propose an LLM-driven NAS pipeline that leverages language-model priors over module design choices and representation quality to guide a sample-efficient search for high-performing composite state encoders. On a mixed-autonomy traffic control task, our approach discovers higher-performing architectures with fewer evaluations than traditional NAS baselines and the LLM-based GENIUS framework.
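For illustration only (this sketch is not taken from the paper): a minimal PyTorch example of what a composite state encoder in the sense of the abstract might look like, with one module per information source (an MLP for sensor measurements, a GRU for time-series signals) and a fusion module that merges their embeddings into the RL state. The class name, module choices, and dimensions are assumptions; in the proposed pipeline these per-source and fusion design choices would be the objects of the LLM-guided architecture search.

```python
import torch
import torch.nn as nn


class CompositeStateEncoder(nn.Module):
    """Illustrative composite encoder: one module per source plus a fusion module."""

    def __init__(self, sensor_dim: int, series_dim: int,
                 hidden_dim: int = 64, latent_dim: int = 32):
        super().__init__()
        # Source-specific module for static sensor measurements (MLP).
        self.sensor_encoder = nn.Sequential(
            nn.Linear(sensor_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )
        # Source-specific module for time-series signals (GRU over the sequence).
        self.series_encoder = nn.GRU(series_dim, latent_dim, batch_first=True)
        # Fusion module combining per-source embeddings into the final state encoding.
        self.fusion = nn.Sequential(
            nn.Linear(2 * latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, sensors: torch.Tensor, series: torch.Tensor) -> torch.Tensor:
        z_sensor = self.sensor_encoder(sensors)      # (batch, latent_dim)
        _, h_last = self.series_encoder(series)      # h_last: (1, batch, latent_dim)
        z_series = h_last.squeeze(0)                 # (batch, latent_dim)
        return self.fusion(torch.cat([z_sensor, z_series], dim=-1))


# Usage: encode a batch of 8 states with 10 sensor readings and a 20-step series of 4 signals.
encoder = CompositeStateEncoder(sensor_dim=10, series_dim=4)
state = encoder(torch.randn(8, 10), torch.randn(8, 20, 4))
print(state.shape)  # torch.Size([8, 32])
```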
Submission Type: Research Paper (4-9 Pages)
Submission Number: 107