Track: tiny / short paper (up to 4 pages)
Keywords: Reinforcement Learning; Code LLMs; Large Language Models; Evolutionary Search
TL;DR: Discovering activation functions for RL using evolutionary search and LLMs
Abstract: Deep Reinforcement Learning (DRL) has traditionally inherited activation functions from supervised learning, despite fundamental differences in learning dynamics and objectives. We present EvolveAct, a novel framework that leverages large language models (LLMs) and evolutionary search to automatically discover effective activation functions for specific RL tasks. Our method combines genetic programming with code LLMs to explore a rich space of mathematical functions, optimizing for stability and performance in DRL training. Experimental results across multiple environments show that the discovered activation functions consistently outperform standard choices such as ReLU and tanh, improving average final performance by 37.25% on the MinAtar suite and 28.3% on the Brax suite. By jointly optimizing over multiple diverse environments, we discover activation functions that generalize strongly across different RL domains. This research provides a foundation for automating fundamental architectural choices in deep reinforcement learning systems.
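The abstract describes an evolutionary loop in which a code LLM proposes candidate activation functions that are then scored by DRL training runs. The following is a minimal sketch of such a loop, not the authors' implementation: the LLM proposal step and the RL evaluation are stubbed out with hypothetical placeholders (propose_candidates, evaluate), and the toy fitness proxy stands in for training an agent on MinAtar or Brax.

```python
# Minimal sketch of an LLM-driven evolutionary search over activation functions.
# All function names and the fitness proxy are illustrative placeholders.
import math
import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    """An activation-function candidate expressed as a Python expression in x."""
    expression: str          # e.g. "x * math.tanh(math.log1p(math.exp(x)))"
    fitness: float = float("-inf")

    def as_function(self) -> Callable[[float], float]:
        # Compile the expression into a callable with a restricted namespace.
        return lambda x: eval(self.expression, {"math": math, "x": x})


def propose_candidates(parents: List[Candidate], n: int) -> List[Candidate]:
    """Placeholder for the code-LLM mutation/crossover step.

    In the described framework, parent expressions would be sent to a code LLM
    prompted for novel, numerically stable variations. Here we only sample from
    a fixed pool of known forms so the sketch stays self-contained and runnable.
    """
    pool = [
        "max(0.0, x)",                              # ReLU baseline
        "math.tanh(x)",                             # tanh baseline
        "x * math.tanh(math.log1p(math.exp(x)))",   # Mish-like form
        "x / (1.0 + math.exp(-x))",                 # SiLU/Swish-like form
    ]
    return [Candidate(random.choice(pool)) for _ in range(n)]


def evaluate(candidate: Candidate) -> float:
    """Placeholder fitness: stands in for training a DRL agent with the
    candidate activation and returning its mean final return."""
    f = candidate.as_function()
    xs = [i / 10.0 for i in range(-50, 51)]
    ys = [f(x) for x in xs]
    # Toy proxy: reward finite outputs, penalize exploding magnitudes.
    return sum(1.0 for y in ys if math.isfinite(y)) - sum(abs(y) > 20.0 for y in ys)


def evolve(generations: int = 5, population_size: int = 8) -> Candidate:
    population = propose_candidates([], population_size)
    for _ in range(generations):
        for cand in population:
            cand.fitness = evaluate(cand)
        population.sort(key=lambda c: c.fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Refill the population with new proposals conditioned on the survivors.
        population = survivors + propose_candidates(
            survivors, population_size - len(survivors)
        )
    return max(population, key=lambda c: c.fitness)


if __name__ == "__main__":
    best = evolve()
    print("Best candidate:", best.expression, "fitness:", best.fitness)
```

In the full framework, evaluate would launch DRL training runs (jointly over several environments when optimizing for generalization) and propose_candidates would query a code LLM; only the outer select-and-refill structure is shown here.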
Anonymization: This submission has been anonymized for double-blind review by removing identifying information such as names, affiliations, and URLs.
Submission Number: 36